This project is mirrored from https://github.com/jmorganca/ollama.git. Pull mirroring updated Sep 19, 2024.
hallh/fix-586 · 900e5ddb · relay default predict options to llama.cpp · Oct 02, 2023
brucemacd/numgpu-logs · 6ba02bbe · Update llama.go · Oct 02, 2023
brucemacd/modelfile-format · 23791d7b · output type parsed from modelfile · Oct 02, 2023
show-default · 47d930ba · show a default message when license/parameters/system prompt/template aren't specified · Oct 02, 2023
mxyng/starcoder · c02c0cd4 · starcoder · Oct 02, 2023
brucemacd/sync-generate · 9295e9eb · Update routes.go · Oct 03, 2023
brucemacd/buffer-size · 903d4f1d · increase streaming buffer size · Oct 03, 2023
brucemacd/linux-q8 · 9907460e · enable q8, q5, 5_1, and f32 for linux gpu · Oct 03, 2023
mxyng/concurrent-downloads · 2d7eba03 · names · Oct 03, 2023
brucemacd/rename-subprocess · 549a26ef · windows fix · Oct 04, 2023
help-text · 9c7d8376 · revise help text · Oct 04, 2023
brucemacd/async-preload · 76a965fe · display message if the model take a while to load · Oct 05, 2023
brucemacd/validate-api-opts · 413a9155 · validate api options fields from map · Oct 05, 2023
brucemacd/api-model-not-found-err · 0d9da05b · not found error before pulling model · Oct 06, 2023
brucemacd/create-model-feedback · 77a9a117 · add feedback for reading model metadata · Oct 06, 2023
mxyng/http-proxy · 2cfffea0 · handle client proxy · Oct 09, 2023
fix-cancel · 36c4681f · always cleanup blob download · Oct 09, 2023
brucemacd/vram-buffer · e11454c0 · wait for command to exit, no timeout · Oct 10, 2023
brucemacd/close-loaded-llm · 1c0f7cbd · prevent waiting on exited command · Oct 10, 2023
brucemacd/failed-llama-runner · 305aa4f1 · Update llama.go · Oct 11, 2023