Tags
Tags mark specific points in a repository's history as important.
This project is mirrored from https://github.com/ggerganov/llama.cpp. Pull mirroring updated Sep 19, 2024.
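Each entry below pairs a tag name with the commit it points to. As a minimal sketch of how such tags work (assuming git is installed; this uses a throwaway local repository, not the mirror itself):

```shell
# Create a disposable repo to illustrate tagging.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial commit"

# A lightweight tag pins a name to the current commit,
# just as the build tags below pin names to llama.cpp commits.
git tag b3488

git tag                      # lists tags: b3488
git rev-parse --short b3488  # the abbreviated commit the tag points to
```

Checking out a tag (e.g. `git checkout b3488`) puts the working tree at exactly that point in history.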
b3488 · 75af08c4 · ggml: bugfix: fix the inactive elements is agnostic for risc-v vector (#8748) · Jul 29, 2024
b3487 · 439b3fc7 · cuda : organize vendor-specific headers into vendors directory (#8746) · Jul 29, 2024
b3486 · 0832de72 · [SYCL] add conv support (#8688) · Jul 29, 2024
b3485 · 6eeaeba1 · cmake: use 1 more thread for non-ggml in CI (#8740) · Jul 28, 2024
b3484 · 4730faca · chore : Fix vulkan related compiler warnings, add help text, improve CLI options (#8477) · Jul 28, 2024
b3483 · 4c676c85 · llama : refactor session file management (#8699) · Jul 28, 2024
b3482 · e54c35e4 · feat: Support Moore Threads GPU (#8383) · Jul 28, 2024
b3479 · 345c8c0c · ggml : add missing semicolon (#0) · Jul 27, 2024
b3472 · b5e95468 · llama : add support for llama 3.1 rope scaling factors (#8676) · Jul 27, 2024
b3471 · 92090eca · llama : add function for model-based max number of graph nodes (#8622) · Jul 27, 2024
b3470 · 9d03d085 · common : add --no-warmup option for main/llama-cli (#8712) · Jul 27, 2024
b3469 · bfb4c749 · cann: Fix Multi-NPU execution error (#8710) · Jul 27, 2024
b3468 · 2b1f616b · ggml : reduce hash table reset cost (#8698) · Jul 27, 2024
b3467 · 01245f5b · llama : fix order of parameters (#8706) · Jul 26, 2024
b3465 · 41cd47ca · examples : export-lora : fix issue with quantized base models (#8687) · Jul 25, 2024
b3464 · 49ce0ab6 · ggml: handle ggml_init failure to fix NULL pointer deref (#8692) · Jul 25, 2024
b3463 · 4226a8d1 · llama : fix build + fix fabs compile warnings (#8683) · Jul 25, 2024
b3462 · bf5a81df · ggml : fix build on Windows with Snapdragon X (#8531) · Jul 25, 2024
b3461 · 88954f7f · tests : fix printfs (#8068) · Jul 25, 2024
b3460 · ed67bcb2 · [SYCL] fix multi-gpu issue on sycl (#8554) · Jul 25, 2024