Tags
Tags mark specific points in the repository's history as important.
This project is mirrored from https://github.com/lucidrains/vit-pytorch. Pull mirroring updated Sep 19, 2024.
0.6.4 · 2263b739 · allow distillable efficient vit to restore efficient vit as well · Dec 25, 2020
0.6.3 · 74074e2b · offer easy way to turn DistillableViT to ViT at the end of training · Dec 25, 2020
0.6.2 · db98ed7a · allow for overriding alpha as well on forward in distillation wrapper · Dec 24, 2020
0.6.1 · 1a934da3 · no grad for teacher in distillation · Dec 24, 2020
0.6.0 · aa9ed249 · add knowledge distillation with distillation tokens, in light of new finding from facebook ai · Dec 24, 2020
0.5.1 · 59787a6b · allow for mean pool with efficient version too · Dec 23, 2020
0.5.0 · 24339644 · offer a way to use mean pooling of last layer · Dec 23, 2020
0.4.0 · b786029e · fix the dimension per head to be independent of dim and heads, to make sure... · Dec 17, 2020
0.3.0 · 96241819 · simplify mlp head · Dec 07, 2020
0.2.7 · 6c8dfc18 · remove float(-inf) as masking value · Nov 13, 2020
0.2.6 · 7a214d71 · allow for training on different image sizes, provided images are smaller than... · Oct 25, 2020
0.2.5 · 6d1df1a9 · more efficient · Oct 22, 2020
0.2.4 · d65a8c17 · remove dropout from last linear to logits · Oct 16, 2020
0.2.3 · f7c164d9 · assert minimum number of patches · Oct 16, 2020
0.2.2 · 35796104 · dropouts are more specific and aggressive in the paper, thanks for letting me know @hilach70 · Oct 14, 2020
0.2.1 · b0e4790c · bump package · Oct 13, 2020
0.2.0 · a0fa4107 · norm cls token before sending to mlp head · Oct 10, 2020
0.1.0 · ee1cbbad · write up example for using efficient transformers · Oct 07, 2020
0.0.5 · d66b29e4 · cleanup stray print · Oct 07, 2020
0.0.4 · f7123720 · add masking · Oct 07, 2020