This project is mirrored from https://github.com/jmorganca/ollama.git. Pull mirroring updated Sep 19, 2024.
emptylayers · bdc01aa5 · only add a layer if there is actual data · Sep 15, 2023
mxyng/materialized-view · bc9573dc · Merge pull request #530 from jmorganca/mxyng/progresswriter · Sep 15, 2023
brucemacd/subprocess-check-running · 76216597 · simplify by using glob · Sep 15, 2023
mxyng/progresswriter · e53bc57d · split uploadBlobChunked · Sep 14, 2023
mxyng/content-length · aa76f581 · set request.ContentLength · Sep 14, 2023
matt/updatedocs · 3eac72b1 · Update docs/api.md · Sep 14, 2023
brucemacd/linux-gpu-multicuda · 747a9a40 · cuda version env var · Sep 14, 2023
brucemacd/cuda-env-var · 5df0d0ec · cuda env var · Sep 14, 2023
matt/streamingapi · e2389b63 · add examples of streaming in python and node · Sep 14, 2023
mxyng/falcon-decode · d0288538 · fix: add falcon.go · Sep 13, 2023
python-bindings · 0ed358d7 · add a simple python client to access ollama · Sep 12, 2023
mxyng/decode · 0c5a4543 · fix model type for 70b · Sep 12, 2023
brucemacd/fix-arm-build · efc7757d · fix ggml arm64 cuda build · Sep 12, 2023
brucemacd/release-linux · 6a6a4519 · amd64 linux build runner · Sep 12, 2023
brucemacd/linux-gpu · e6093f7a · use total gpu memory · Sep 11, 2023
caps · 2cc649f2 · add model format to config layer · Sep 09, 2023
brucemacd/linux-gpu-multiarch · 2eacde3e · Update generate_linux.sh · Sep 08, 2023
autoprune · 572b09be · add autoprune to remove unused layers · Sep 07, 2023
mxyng/generate · 9bdd4dee · add cuda docker image · Sep 07, 2023
mxyng/dockerignore · a8da0bac · update dockerignore · Sep 07, 2023