Tags
Tags mark specific points in the repository's history as important.
This project is mirrored from https://github.com/lucidrains/x-transformers. Pull mirroring updated Sep 19, 2024.
1.28.4 · 3673f744 · export feedforward and rmsnorm for another project · May 02, 2024
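For context, a minimal sketch of what this export enables, assuming 1.28.4 re-exports FeedForward and RMSNorm at the package top level (the import path and signatures are assumptions inferred from the commit message):

    # sketch only: assumes `from x_transformers import FeedForward, RMSNorm` works as of 1.28.4
    import torch
    from x_transformers import FeedForward, RMSNorm

    norm = RMSNorm(512)                # assumed signature: model dimension as first argument
    ff = FeedForward(512, mult = 4)    # assumed signature: dim first, expansion factor via `mult`

    x = torch.randn(1, 16, 512)
    out = x + ff(norm(x))              # pre-norm residual feedforward block
    print(out.shape)                   # torch.Size([1, 16, 512])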
1.28.2 · 8be5705d · also make sure max_attend_past feature works correctly with memory key / values · Apr 28, 2024
1.28.1 · b46949be · address https://github.com/lucidrains/x-transformers/issues/242 · Apr 28, 2024
1.28.0 · d9ec275e · address https://github.com/lucidrains/x-transformers/issues/242 · Apr 28, 2024
1.27.22 · 417a93ed · numerical loss should be masked out where the target is not a number token for xval · Apr 26, 2024
1.27.21 · 1bfe0157 · protect against nan being passed into xval autoregressive wrapper during training · Apr 26, 2024
1.27.20 · 9665f74f · address https://github.com/lucidrains/x-transformers/issues/246 · Apr 15, 2024
1.27.19 · 5fa952a2 · fix get_mask_subset_prob · Feb 27, 2024
1.27.18 · 045ec7ce · always apply custom scale if given to Attend, plan on removing qk_norm kwarg later · Feb 26, 2024
1.27.17 · 70c6277b · 1.27.17 · Feb 26, 2024
1.27.16 · 3ca18197 · address https://github.com/lucidrains/x-transformers/issues/237 · Feb 04, 2024
1.27.15 · b28d4d53 · fix dpo · Feb 01, 2024
1.27.14 · 8516a59e · quick fix · Jan 25, 2024
1.27.12 · 7f0caee1 · xpos + mems fix by @pfeatherstone · Jan 25, 2024
1.27.11 · 99c75bed · able to return both logits and embeddings for vision transformer wrapper · Jan 22, 2024
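A hedged usage sketch for this tag, assuming ViTransformerWrapper's forward gained a return_logits_and_embeddings flag mirroring the one on TransformerWrapper (the kwarg name is an assumption inferred from the commit message, not verified against the release):

    # sketch only: the `return_logits_and_embeddings` kwarg name is assumed
    import torch
    from x_transformers import ViTransformerWrapper, Encoder

    vit = ViTransformerWrapper(
        image_size = 256,
        patch_size = 32,
        num_classes = 1000,
        attn_layers = Encoder(dim = 512, depth = 6, heads = 8)
    )

    img = torch.randn(1, 3, 256, 256)
    logits, embeds = vit(img, return_logits_and_embeddings = True)  # assumed flag
    print(logits.shape, embeds.shape)  # class logits plus per-patch embeddings, per the assumed API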
1.27.10 · 90cef69e · patch · Jan 21, 2024
1.27.9 · b2979195 · fix flash attention with num memory key / values + 1 headed kv · Jan 16, 2024
1.27.8 · 029ec319 · fix memory key values with one write-head paper,... · Jan 16, 2024
1.27.7 · aa380f17 · mem masks should be on the continuous transformer wrapper · Jan 16, 2024
1.27.6 · 22f9b41d · fix xpos (https://github.com/lucidrains/x-transformers/issues/226#issuecomment-1889938198) · Jan 12, 2024
Page 1 of 21