Tags
Tags mark specific points in a project's history as important.
This project is mirrored from https://github.com/lucidrains/meshgpt-pytorch. Pull mirroring updated Sep 19, 2024.
0.0.47 · f9923462 · Dec 08, 2023 · take care of no gradient sync unless last step
0.0.46 · 73770ab2 · Dec 08, 2023 · able to customize what model kwarg the output tuple elements of the dataloader...
0.0.45 · 1603505c · Dec 07, 2023 · checkpoints and gradient accumulation
0.0.44 · 30b707ce · Dec 07, 2023 · move variable seq len dimension to first dimension, so that collator can treat...
0.0.43 · 6cbd8f09 · Dec 07, 2023 · allow for dataset to return dictionary with keys forwarded to kwargs of model forward
0.0.42 · 291901c5 · Dec 07, 2023 · automatically append eos for mesh transformer without need to pass in...
0.0.41a · d5bfff66 · Dec 07, 2023 · oops
0.0.41 · 506a91b6 · Dec 07, 2023 · get ready for meshes to be part of future multimodal project
0.0.40 · d5f2760c · Dec 07, 2023 · save step
0.0.39 · edd8ac36 · Dec 07, 2023 · save and load on both trainers
0.0.38 · 9a59e94c · Dec 07, 2023 · custom collate fn to take into account variable lengthed faces and to use the...
0.0.37 · 5d729744 · Dec 07, 2023 · make sure basic training loop runs for both autoencoder and transformer
0.0.36 · 6c11bf39 · Dec 07, 2023 · aim for another simplification
0.0.35 · 74cc0b50 · Dec 07, 2023 · aim for another simplification
0.0.34 · 0f2b164b · Dec 07, 2023 · simpler
0.0.33 · c98138b6 · Dec 06, 2023 · flash attention, but of course
0.0.32 · 2932f24f · Dec 06, 2023 · nevermind, it can work
0.0.31 · b67bd771 · Dec 06, 2023 · nevermind, it can work
0.0.30 · 20691e9e · Dec 06, 2023 · add linear attention
0.0.29 · 43ee4e1c · Dec 06, 2023 · oops, residual lfq has no stochastic sampling yet
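The 0.0.44 and 0.0.38 entries concern a custom collate function for batching meshes with variable numbers of faces. A minimal framework-free sketch of the padding pattern those entries describe (the names `collate_variable_length` and `pad_id` are illustrative, not the library's API):

```python
def collate_variable_length(sequences, pad_id=-1):
    """Pad a batch of variable-length token sequences to a common length.

    Each element of `sequences` is a list of ints; shorter sequences are
    right-padded with `pad_id` so the batch forms a rectangular array.
    A boolean mask marks real tokens (True) versus padding (False).
    """
    max_len = max(len(seq) for seq in sequences)
    padded = [seq + [pad_id] * (max_len - len(seq)) for seq in sequences]
    mask = [[i < len(seq) for i in range(max_len)] for seq in sequences]
    return padded, mask


batch, mask = collate_variable_length([[1, 2, 3], [4]])
# batch → [[1, 2, 3], [4, -1, -1]]
# mask  → [[True, True, True], [True, False, False]]
```

In a PyTorch pipeline the same pattern is usually expressed with `torch.nn.utils.rnn.pad_sequence`, which pads along the first dimension, consistent with 0.0.44's move of the variable-length dimension to the front.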
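The 0.0.47 entry ("take care of no gradient sync unless last step") refers to a common gradient-accumulation optimization: under data parallelism, the cross-worker gradient all-reduce is skipped on intermediate micro-batches and performed only on the step that triggers the optimizer update. A framework-free sketch of the scheduling logic (`should_sync` and `grad_accum_every` are assumed names for illustration):

```python
def should_sync(step, grad_accum_every):
    """Return True only on the last micro-step of an accumulation window.

    Steps are 1-indexed; gradients are all-reduced (synced) across
    data-parallel workers only when the optimizer is about to update,
    i.e. on every `grad_accum_every`-th step.
    """
    return step % grad_accum_every == 0


# Simulated training loop: record on which steps a sync would happen.
synced_steps = [step for step in range(1, 9) if should_sync(step, grad_accum_every=4)]
# synced_steps → [4, 8]
```

With `torch.nn.parallel.DistributedDataParallel`, the intermediate steps would run the backward pass inside the `model.no_sync()` context manager, so the all-reduce fires only once per optimizer update.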