This project is mirrored from https://github.com/run-llama/LlamaIndexTS.
Nov 30, 2023
- Marcus Schiesser authored

Nov 29, 2023
- yisding authored
  Unify apps/simple and examples - move everything to examples
- Marcus Schiesser authored

Nov 28, 2023
- yisding authored
  Update link to python docs
- yisding authored
  Minor fixes in PGVectorStore
- yisding authored
  Feat: add GPT4 Vision support (and file upload) to create-llama
- Laurie Voss authored

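The PGVectorStore that gets minor fixes here (and is first added in the Nov 22 commits below) stores embeddings in Postgres via pgvector. A minimal usage sketch, assuming a llamaindex build that exposes PGVectorStore and storageContextFromDefaults; the connectionString option and the PG_CONNECTION_STRING variable are illustrative assumptions (some releases instead read the standard PG* environment variables), and the query call shape varies between releases:

```ts
import {
  Document,
  PGVectorStore,
  VectorStoreIndex,
  storageContextFromDefaults,
} from "llamaindex";

// Assumption: PGVectorStore accepts a Postgres connection string here;
// depending on the release it may instead read PGHOST/PGUSER/PGPASSWORD etc.
const vectorStore = new PGVectorStore({
  connectionString: process.env.PG_CONNECTION_STRING,
});

// Route embedding storage through Postgres instead of the default
// in-memory vector store.
const storageContext = await storageContextFromDefaults({ vectorStore });

const index = await VectorStoreIndex.fromDocuments(
  [new Document({ text: "LlamaIndex.TS can persist embeddings in pgvector." })],
  { storageContext },
);

const response = await index.asQueryEngine().query("Where are embeddings persisted?");
console.log(response.toString());
```
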
Nov 27, 2023
- Michael Tutty authored

Nov 24, 2023
- Marcus Schiesser authored
- Marcus Schiesser authored
  fix: set maxTokens to 4096 so vision model is not stopping too early (seems to have a lower default than other models)
- Marcus Schiesser authored
- Alex Yang authored

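The maxTokens fix above concerns the GPT-4 Vision model used by create-llama, whose low default output limit can truncate responses. A minimal sketch of the setting, assuming the llamaindex OpenAI LLM class and the gpt-4-vision-preview model name; image inputs (passed through chat message content) are omitted, and call signatures differ between releases:

```ts
import { OpenAI } from "llamaindex";

// gpt-4-vision-preview defaults to a small output size, so raise
// maxTokens explicitly to avoid answers being cut off early.
const llm = new OpenAI({
  model: "gpt-4-vision-preview",
  maxTokens: 4096,
});

const response = await llm.complete("Summarize the attached slide in a few sentences.");

// Older releases return a chat-style response ({ message: { content } });
// newer ones return { text } - adjust the access accordingly.
console.log(response.message.content);
```
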
Nov 23, 2023
- yisding authored
- yisding authored
  Several fixes for improving compatibility with Next.JS
- Marcus Schiesser authored
- Marcus Schiesser authored
- Marcus Schiesser authored
- Marcus Schiesser authored
- Marcus Schiesser authored
- Marcus Schiesser authored
- Marcus Schiesser authored
- Marcus Schiesser authored
- Marcus Schiesser authored

Nov 22, 2023
- yisding authored
- yisding authored
  Add AssemblyAI integration
- yisding authored
- yisding authored
- yisding authored
  Add PGVectorStore
- yisding authored
- yisding authored
  support for claude-2.1
- yisding authored
- yisding authored
- yisding authored
  feat: add clip embedding to llamaindex
- yisding authored
  Added custom RAG prompt for Claude. Supporting system message format.
- yisding authored
  add .env instructions

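The claude-2.1 and Claude RAG prompt commits above add Anthropic's newer model and a system-message-friendly prompt format. A minimal sketch of pointing the Anthropic LLM at claude-2.1, assuming the llamaindex Anthropic class, serviceContextFromDefaults, and an ANTHROPIC_API_KEY in the environment (see the ".env instructions" commit above); the query call shape varies across releases:

```ts
import {
  Anthropic,
  Document,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";

// Use Claude 2.1 for response synthesis instead of the default OpenAI LLM.
const llm = new Anthropic({ model: "claude-2.1" });
const serviceContext = serviceContextFromDefaults({ llm });

const index = await VectorStoreIndex.fromDocuments(
  [new Document({ text: "claude-2.1 is Anthropic's latest chat model." })],
  { serviceContext },
);

const response = await index.asQueryEngine().query("Which model answers this query?");
console.log(response.toString());
```
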
Nov 21, 2023
- Marcus Schiesser authored
- Marcus Schiesser authored
- Marcus Schiesser authored
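The "feat: add clip embedding to llamaindex" commit (Nov 22 above) introduces a CLIP-based embedding model that places text and images in the same vector space. A minimal sketch, assuming the ClipEmbedding class exposes getTextEmbedding and getImageEmbedding; method names and the accepted image types may differ in your release:

```ts
import { ClipEmbedding } from "llamaindex";

// CLIP embeds text and images into a shared space, so the two vectors
// below can be compared directly (e.g. with cosine similarity).
const clip = new ClipEmbedding();

const textVector = await clip.getTextEmbedding("a photo of a llama");
const imageVector = await clip.getImageEmbedding("./llama.png");

console.log(textVector.length, imageVector.length);
```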