This project is mirrored from https://github.com/huggingface/transformers. Pull mirroring updated Sep 19, 2024.
Branches:

- thomas/add_custom_kernels · 46d37bec · Add safety in case the entire row is 0 · Sep 01, 2022
- pin-ffspec · 5b92eb1c · Skip test · Aug 31, 2022
- thomas/make_tp_bloom_generate_work · 6ee02867 · Dtype should work correctly: · Aug 31, 2022
- thomas/make_tp_work_with_bloom · cd69d253 · Nit · Aug 26, 2022
- test-new-doc-builder-workflow · bd0676e7 · Random change to test doc building · Aug 23, 2022
- thomas/dirty_bloom_tp · f4d0dc3c · WIP · Aug 17, 2022
- move_part_2 · 17dff54a · Forgot one new_ for cache migration · Aug 05, 2022
- int · 5cd40323 · Use new huggingface_hub tools for download models (#18438) · Aug 05, 2022
- torch_versions · a6937898 · Fix torch version comparisons · Aug 03, 2022
- thomas/improve_bloom_generation_speed · 53e8738c · Woops · Aug 02, 2022
- thomas/bloom_allow_fp32_lm_head · 86919416 · Woops · Aug 01, 2022
- improve_error_message_when_transformers_is_misconfigured · a586984a · Black version. · Aug 01, 2022
- thomas/accelerate_gptj · 4c21b9e7 · Use masked_fill instead · Jul 31, 2022
- thomas/accelerate_gpt2 · 3df009e4 · add torch.all in test · Jul 31, 2022
- thomas/fix_bloom · ba58e5b1 · Remove unused imports · Jul 28, 2022
- muellerzr-metrics · 4d517a81 · Start updating all no trainer examples · Jul 25, 2022
- deberta-xla-fixes · 66f648d7 · make fixup · Jul 22, 2022
- general_test_low_cpu_mem · c94914d8 · Let's see what breaks 😬 · Jul 22, 2022
- custom_bloom_kernel · 22ddccc4 · Attempt to use custom cuda kernels for speed up inference for bloom. · Jul 21, 2022
- nezha_slow · 7d3cf2b6 · Make Nezha tests slow · Jul 11, 2022