# create-llama

## 0.1.26

### Patch Changes

- f43399cc: Add MetadataFilters to context chat engine (Typescript)

## 0.1.25

### Patch Changes

- c67daeb2: fix: set `private` to false by default in generate.py

## 0.1.24

### Patch Changes

- 43474a51: Configure LlamaCloud organization ID for Python
- cf11b233: Add Azure code interpreter for Python and TS
- fd9fb42a: Add Azure OpenAI as a model provider
- 5c13646e: Fix starter questions not working in Python backend

## 0.1.23

### Patch Changes

- 6bd76fbf: Add template for structured extraction

## 0.1.22

### Patch Changes

## 0.1.21

### Patch Changes

- bd4714ca: Filter private documents for Typescript (using MetadataFilters) and update to LlamaIndexTS 0.5.7
- 58e6c150: Use LlamaParse for the private file uploader
- 455ab686: Display files in sources using LlamaCloud indexes
- 23b73571: Use gpt-4o-mini as the default model
- 09004136: Add suggestions for next questions
## 0.1.20

### Patch Changes

- 624c721a: Update to LlamaIndex 0.10.55

## 0.1.19

### Patch Changes

- df96159e: Use Qdrant FastEmbed as the local embedding provider
- 32fb32ab: Support uploading document files: pdf, docx, txt

## 0.1.18

### Patch Changes

- d1026ea7: Support Mistral as LLM and embedding provider
- a221cfc1: Use LlamaParse for all the file types that it supports (if activated)

## 0.1.17

### Patch Changes

- 9ecd0612: Add new template for a multi-agent app

## 0.1.16

### Patch Changes

- a0aab032: Add T-System's LLMHUB as a model provider

## 0.1.15

### Patch Changes

- 64732f05: Fix images not showing with the sandbox URL from OpenAI's models
- aeb6fef4: Use LlamaCloud for chat

## 0.1.14

### Patch Changes

## 0.1.13

### Patch Changes

- b3c969da: Add image generator tool
## 0.1.12

### Patch Changes

- aa69014d: Fix NextJS for TS 5.2

## 0.1.11

### Patch Changes

- 48b96ff1: Add DuckDuckGo search tool
- 9c9decbb: Reuse function tool instances and improve E2B interpreter tool for Python
- 02ed277d: Add Groq as a model provider
- 0748f2e8: Remove hard-coded Gemini supported models

## 0.1.10

### Patch Changes

- 9112d080: Add OpenAPI tool for Typescript
- 8f03f8d4: Add OLLAMA_REQUEST_TIMEOUT variable to configure the Ollama timeout (Python)
- 8f03f8d4: Apply nest_asyncio for LlamaParse
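As a usage sketch for the `OLLAMA_REQUEST_TIMEOUT` variable mentioned above (the variable name comes from the changelog entry; the value and the run command are illustrative assumptions, not part of the release):

```shell
# Illustrative only: allow Ollama up to 120 seconds per request
# before the Python backend gives up (value is an assumption)
export OLLAMA_REQUEST_TIMEOUT=120.0
poetry run python main.py  # hypothetical way to start the FastAPI backend
```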
## 0.1.9

### Patch Changes

- a42fa53a: Add CSV upload
- 563b51d7: Fix Vercel streaming (Python) to stream data events instantly
- d60b3c5a: Add E2B code interpreter tool for FastAPI
- 956538ee: Add OpenAPI action tool for FastAPI
## 0.1.8

### Patch Changes

- cd50a33d: Add interpreter tool for TS using e2b.dev

## 0.1.7

### Patch Changes

- 260d37a3: Add system prompt env variable for TS
- bbd5b8dd: Fix Postgres connection leak
- bb53425b: Support HTTP proxies by setting the GLOBAL_AGENT_HTTP_PROXY env variable
- 69c2e16c: Fix streaming for Express
- 7873bfb0: Update Ollama provider to use the base URL from the environment variable
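As a usage sketch for the proxy support above (the `GLOBAL_AGENT_HTTP_PROXY` name comes from the changelog entry; the proxy URL and run command are illustrative assumptions):

```shell
# Illustrative only: route the TS backend's outbound HTTP requests
# through a corporate proxy (URL is an assumption)
export GLOBAL_AGENT_HTTP_PROXY=http://proxy.example.com:3128
npm run dev  # hypothetical way to start the generated app
```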
## 0.1.6

### Patch Changes

- 56537a14: Display PDF files in source nodes

## 0.1.5

### Patch Changes

- 84db7983: feat: support displaying LaTeX in chat markdown

## 0.1.4

### Patch Changes

- 0bc8e75c: Use ingestion pipeline for dedicated vector stores (Python only)
- cb1001de: Add ChromaDB vector store
## 0.1.3

### Patch Changes

- 416073db: Directly import vector stores to work with NextJS

## 0.1.2

### Patch Changes

- 056e376e: Add support for displaying tool outputs (including a weather widget as an example)

## 0.1.1

### Patch Changes

- 7bd3ed55: Support Anthropic and Gemini as model providers
- 7bd3ed55: Support new agents from LITS 0.3
- cfb5257a: Display events (e.g. retrieving nodes) per chat message

## 0.1.0

### Minor Changes

- f1c3e8df: Add Llama3 and Phi3 support using Ollama

### Patch Changes
- a0dec80d: Use `gpt-4-turbo` model as default. Upgrade Python llama-index to 0.10.28
- 753229df: Remove asking for AI models and use defaults instead (OpenAI's GPT-4 Vision Preview and Embeddings v3). Use the `--ask-models` CLI parameter to select models.
- 1d78202e: Add observability for Python
- 6acccd2b: Use `poetry run generate` to generate embeddings for FastAPI
- 9efcffe1: Use Settings object for LlamaIndex configuration
- 418bf9ba: refactor: use tsx instead of ts-node
- 1be69a5a: Add Qdrant support
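As a usage sketch for the `--ask-models` flag introduced above (the flag name comes from the changelog entry; the exact invocation is an illustrative assumption):

```shell
# Illustrative only: opt back into interactive model selection
# instead of the new defaults
npx create-llama@latest --ask-models
```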
## 0.0.32

### Patch Changes

- 625ed4d6: Support Astra VectorDB
- 922e0ceb: Remove UI question (use shadcn as default). Use the `html` UI by calling create-llama with the `--ui html` parameter
- ce2f24d7: Update loaders and tools config to YAML format (for Python)
- e8db041d: Let user select multiple data sources (URLs, files, and folders)
- c06d4af7: Add nodes to the response (Python)
- 29b17ee3: Allow using agents without any data source
- 665c26cc: Add redirect to documentation page when accessing the base URL (FastAPI)
- 78ded9e2: Add Dockerfile templates for Typescript and Python
- 99e758fc: Merge non-streaming and streaming templates into one
- b3f26856: Add support for agent generation for Typescript
- 27397143: Use a database (MySQL or PostgreSQL) as a data source
## 0.0.31

### Patch Changes

- 56faee0b: Added Windows e2e tests
- 60ed8fe0: Added missing environment variable config for URL data source
- 60ed8fe0: Fixed tool usage by freezing llama-index package versions

## 0.0.30

### Patch Changes

- 3af63284: Add support for LlamaParse using Typescript
- dd92b911: Add fetching LLM and embedding models from server
- bac1b43f: Add Milvus vector database

## 0.0.29

### Patch Changes

- edd24c21: Add observability with OpenLLMetry
- 403fc6f3: Minor bug fixes to improve DX (missing .env value and updated error messages)
- 0f797579: Ability to download community submodules

## 0.0.28

### Patch Changes

- 89a49f4: Add more config variables to .env file
- fdf48dd: Add "Start in VSCode" option to postInstallAction
- fdf48dd: Add devcontainers to generated code

## 0.0.27

### Patch Changes

- 2d29350: Add LlamaParse option when selecting a PDF file or a folder (FastAPI only)
- b354f23: Add embedding model option to create-llama (FastAPI only)
## 0.0.26

### Patch Changes

- 09d532e: feat: generate llama pack project from LlamaIndex
- cfdd6db: feat: add Pinecone support to create-llama
- ef25d69: upgrade llama-index package to version v0.10.7 for create-llama app
- 50dfd7b: update FastAPI for CVE-2024-24762

## 0.0.25

### Patch Changes

- d06a85b: Add option to create an agent by selecting tools (Google, Wikipedia)
- 7b7329b: Added latest turbo models for GPT-3.5 and GPT-4

## 0.0.24

### Patch Changes

- ba95ca3: Use condense-plus-context chat engine as the default for FastAPI

## 0.0.23

### Patch Changes

- c680af6: Fixed issues with locating templates path

## 0.0.22

### Patch Changes

- 6dd401e: Add an option to provide a URL and chat with the website data (FastAPI only)
- e9b87ef: Select a folder as data source and support more file types (.pdf, .doc, .docx, .xls, .xlsx, .csv)
## 0.0.20

### Patch Changes

- 27d55fd: Add an option to provide a URL and chat with the website data

## 0.0.19

### Patch Changes

- 3a29a80: Add node_modules to gitignore in Express backends
- fe03aaa: feat: generate llama pack example

## 0.0.18

### Patch Changes

- 88d3b41: fix packaging

## 0.0.17

### Patch Changes

- fa17f7e: Add an option that allows the user to run the generated app
- 9e5d8e1: Add an option to select a local PDF file as data source

## 0.0.16

### Patch Changes

- a73942d: Fix: Bundle mongo dependency with NextJS
- 9492cc6: Feat: Added option to automatically install dependencies (for Python and TS)
- f74dea5: Feat: Show images in chat messages using GPT-4 Vision (Express and NextJS only)

## 0.0.15

### Patch Changes

- 8e124e5: feat: support showing images in chat messages

## 0.0.14

### Patch Changes

- 2e6b36e: fix: re-organize file structure
- 2b356c8: fix: incorrect relative path
## 0.0.13

### Patch Changes

- Added PostgreSQL vector store (for Typescript and Python)
- Improved async handling in FastAPI

## 0.0.12

### Patch Changes

- 9c5e22a: Added cross-env so frontends with Express/FastAPI backends work under Windows
- 5ab65eb: Bring Python templates to feature parity with TS templates
- 9c5e22a: Added vector DB selector to create-llama (starting with MongoDB support)

## 0.0.11

### Patch Changes

- 2aeb341:
  - Added option to create a new project based on community templates
  - Added OpenAI model selector for NextJS projects
  - Added GPT-4 Vision support (and file upload)
## 0.0.10

### Patch Changes

- Bugfixes (thanks @marcusschiesser)

## 0.0.9

### Patch Changes

- acfe232: Deployment fixes (thanks @seldo)

## 0.0.8

### Patch Changes

- 8cdb07f: Fix Next deployment (thanks @seldo and @marcusschiesser)

## 0.0.7

### Patch Changes

- 9f9f293: Added more to the README and made it easier to switch models (thanks @seldo)