
ChangeLog

[0.10.14] - 2024-02-28

New Features

  • Released llama-index-networks (#11413)
  • Added the Jina reranker (#11291)
  • Added DuckDuckGo agent search tool (#11386); see the sketch after this list
  • Added helper functions for ChatML (#10272)
  • Added Brave search tool for agents (#11468)
  • Added Friendli LLM integration (#11384)
  • Added metadata-only queries for ChromaDB (#11328)
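
A minimal sketch of plugging the new DuckDuckGo search tool into an agent. The package name llama-index-tools-duckduckgo and the DuckDuckGoSearchToolSpec class are assumptions based on the usual tool-spec layout; check the package docs for the exact names.

```python
# pip install llama-index-tools-duckduckgo  (assumed package name)
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI
from llama_index.tools.duckduckgo import DuckDuckGoSearchToolSpec  # assumed import path

# Expose the search spec as a list of agent tools.
tools = DuckDuckGoSearchToolSpec().to_tool_list()

agent = ReActAgent.from_tools(tools, llm=OpenAI(model="gpt-3.5-turbo"), verbose=True)
print(agent.chat("Who maintains llama-index?"))
```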

Bug Fixes / Nits

  • Fixed inheriting the LLM callback in synthesizers (#11404)
  • Catch delete errors in Milvus (#11315)
  • Fixed Pinecone kwargs issue (#11422)
  • Fixed Supabase metadata filtering (#11428)
  • Fixed the API base in Gemini embeddings (#11393)
  • Fixed await usage in the Elasticsearch vector store (#11438)
  • Fixed CUDA issue in the vLLM server (#11442)
  • Fixed passing an LLM to the context chat engine (#11444)
  • Set input types for Cohere embeddings (#11288)
  • Added a default value for the Azure AD token (#10377)
  • Added back the prompt mixin for the ReAct agent (#10610)
  • Fixed system roles for Gemini (#11481)
  • Fixed mean-agg pooling returning NumPy float values (#11458)
  • Improved JSON path parsing for JSONQueryEngine (#9097)

[0.10.13] - 2024-02-26

New Features

  • Added a llama-pack for KodaRetriever, for on-the-fly alpha tuning (#11311)
  • Added support for mistral-large (#11398)
  • Added a last-token pooling mode for HuggingFace embedding models like SFR-Embedding-Mistral (#11373)
  • Added fsspec support to SimpleDirectoryReader (#11303); see the sketch below
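
A minimal sketch of reading from a remote filesystem with the new fsspec support; the fs keyword argument is an assumption based on the feature description, and s3fs is just one example of an fsspec-compatible filesystem.

```python
# pip install s3fs  (any fsspec-compatible filesystem works)
from s3fs import S3FileSystem
from llama_index.core import SimpleDirectoryReader

# Point the reader at a remote filesystem instead of the local disk.
fs = S3FileSystem(anon=False)  # credentials come from the usual AWS env vars
docs = SimpleDirectoryReader(
    input_dir="my-bucket/docs",  # hypothetical bucket path
    fs=fs,  # assumed keyword added by the fsspec support
).load_data()
print(f"loaded {len(docs)} documents")
```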

Bug Fixes / Nits

  • Fixed an issue with context window + prompt helper (#11379)
  • Moved OpenSearch vector store to BasePydanticVectorStore (#11400)
  • Fixed function calling in the Fireworks LLM (#11363)
  • Made Cohere embedding types more automatic (#11288)
  • Improved function calling in the ReAct agent (#11280)
  • Fixed MockLLM imports (#11376)

[0.10.12] - 2024-02-22

New Features

  • Added the llama-index-postprocessor-colbert-rerank package (#11057); see the sketch after this list
  • MyMagicAI LLM (#11263)
  • MariaTalk LLM (#10925)
  • Added retries to the GitHub reader (#10980)
  • Added FireworksAI embedding and LLM modules (#10959)
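
A minimal sketch of using the new ColBERT reranker as a node postprocessor. The import path llama_index.postprocessor.colbert_rerank, the ColbertRerank class name, and the top_n argument are assumptions; see the package README for the real interface.

```python
# pip install llama-index-postprocessor-colbert-rerank
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.postprocessor.colbert_rerank import ColbertRerank  # assumed import path

index = VectorStoreIndex.from_documents(SimpleDirectoryReader("./data").load_data())

# Retrieve a wider candidate set, then let ColBERT rerank it before synthesis.
query_engine = index.as_query_engine(
    similarity_top_k=10,
    node_postprocessors=[ColbertRerank(top_n=3)],  # top_n is an assumed parameter
)
print(query_engine.query("What changed in the reranking layer?"))
```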

Bug Fixes / Nits

  • Fixed string formatting in Weaviate (#11294)
  • Fixed off-by-one error in semantic splitter (#11295)
  • Fixed download_llama_pack for multiple files (#11272)
  • Removed BUILD files from packages (#11267)
  • Loosened Python version requirements for all packages (#11267)
  • Fixed args issue with ChromaDB (#11104)

[0.10.11] - 2024-02-21

Bug Fixes / Nits

  • Fixed multi-modal LLM for async acomplete (#11064)
  • Fixed issue with llamaindex-cli imports (#11068)

[0.10.10] - 2024-02-20

Our publishing process is still a bit wonky -- apologies. This is just a version bump to ensure the changes that were supposed to happen in 0.10.9 actually got published. (AF)

[0.10.9] - 2024-02-20

  • Added the llama-index-cli dependency

[0.10.7] - 2024-02-19

New Features

  • Added Self-Discover llama-pack (#10951); see the sketch below
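
A minimal sketch of pulling down the new pack with download_llama_pack. The pack name "SelfDiscoverPack" and its constructor and run arguments are assumptions; the pack's README is the authoritative reference.

```python
from llama_index.core.llama_pack import download_llama_pack
from llama_index.llms.openai import OpenAI

# Download the pack source into a local directory and get its class.
SelfDiscoverPack = download_llama_pack("SelfDiscoverPack", "./self_discover_pack")  # assumed pack name

pack = SelfDiscoverPack(llm=OpenAI(model="gpt-4"), verbose=True)  # assumed constructor args
print(pack.run("Break down how to plan a week of exam revision."))
```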

Bug Fixes / Nits

  • Fixed linting in CI/CD (#10945)
  • Fixed using remote graph stores (#10971)
  • Added missing LLM kwarg in NoText response synthesizer (#10971)
  • Fixed the OpenAI import in RankGPT (#10971)
  • Fixed resolving model names to strings in OpenAI embeddings (#10971)
  • Fixed an off-by-one error in the sentence window node parser (#10971)

[0.10.6] - 2024-02-17

First, apologies for missing the changelog for the last few versions. We're still figuring out the best process with 400+ packages.