This project is mirrored from https://github.com/run-llama/create-llama. Pull mirroring updated Sep 19, 2024.
feat/use-3.5-as-default-model · 6cb989c4 · feat: use 3.5 as default model · May 09, 2024
lee/chores · b6bd88ec · update wrong example system prompt · May 08, 2024
lee/use-ingestion-pipeline · 68322015 · add back the old changes · May 03, 2024
ingestion-pipeline · b1dfe7af · Fix dimensions typo in settings.py · May 03, 2024
feat/support-anthropic-and-gemini · 99bd012a · fix: use own callbackmanager per request · May 02, 2024
fix-python-indexing · c3215ccc · better log · May 02, 2024
0.3.0 · 16774f58 · fix: type · May 01, 2024
lee/vercel-streaming · 73dfac54 · fix vercel stream breaking · Apr 10, 2024
fix/remove-empty-object-at-start-and-end-of-streaming · fdd4a4ef · check vison model to append empty data · Apr 05, 2024
lee/add-source-nodes-response · 1664be0e · docs(changeset): Add nodes to the response and support Vercel streaming format · Apr 01, 2024
add-dockers · a21dc86e · update typescript start · Mar 27, 2024
lee/multi-data-source · 61119432 · update paths for macos · Mar 22, 2024
fix/add-line-break-for-shebang · 1898d0ea · test: remove shebang · Mar 20, 2024
refactor/use-llamaindex-edge · 7626ca77 · refactor: import from llamaindex edge for typescript templates and vectordbs · Mar 18, 2024