Tags
Tags mark specific points in the project's history as important.
This project is mirrored from https://github.com/lucidrains/imagen-pytorch. Pull mirroring updated Sep 19, 2024.
v0.0.51 · a45f8497 · 0.0.51 · May 30, 2022
v0.0.50 · ba0662f1 · fix how version is handled · May 30, 2022
v0.0.49 · bc5b3140 · strengthen attention pooling by including mean pooled projected latents of the original sequence · May 29, 2022
v0.0.48 · ff0d39fa · make unet parameters all kwargs · May 28, 2022
v0.0.47 · 8ceb2db9 · let the unet take care of the final layernorm of text embeddings · May 27, 2022
v0.0.46 · f7a80060 · 0.0.46 · May 27, 2022
v0.0.45 · 939d10cb · 0.0.45 · May 27, 2022
v0.0.44 · f36eeea4 · it seems they always attention pool the text · May 27, 2022
v0.0.43 · 8fbad573 · complete attention pooling feature using perceiver resampler from flamingo, cite properly · May 27, 2022
v0.0.42a · 1410db94 · 0.0.42 · May 27, 2022
v0.0.42 · 9276ea6c · bring in deepminds perceiver resampler for a more advanced attention pooling · May 27, 2022
v0.0.41 · d1d68e86 · rip weight decay · May 27, 2022
v0.0.40 · 5f152870 · solidify some attention related settings, make sure researcher knows text mask must be passed in · May 27, 2022
v0.0.39 · 8ef70245 · line up more settings · May 27, 2022
v0.0.38 · a1da79e2 · 0.0.38 · May 27, 2022
v0.0.37 · c2d94941 · fix bug with get_times in noise scheduler · May 26, 2022
v0.0.36 · 4fbcfbc5 · line up cross attention settings as in paper, customizable across layers · May 26, 2022
v0.0.35 · bda8f0b4 · convenience class for SR unets in latter two stages in cascade · May 26, 2022
v0.0.34 · 1749aa6b · 0.0.34 · May 26, 2022
v0.0.33 · 7b0a1dee · final preparation for continuous times, remove an unnecessary clip_denoised... · May 26, 2022