This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- May 23, 2024 (Timothy Carambat)
  * Improve RAG responses via source backfilling
  * Hide irrelevant citations from the UI
- Feb 21, 2024 (Timothy Carambat)
  * Enable full-text queries on documents; show an alert modal on first pin for the client; add the ability to use pins in stream/chat/embed
  * Fix typo and update copy
  * Simplify spread of context and sources
- Feb 14, 2024 (Timothy Carambat)
  * Refactor stream/chat/embed-stream into a single execution logic path so it is easier to maintain and build upon
  * Drop thread from sync chat since only the API uses it; adjust import locations
- Jan 05, 2024 (timothycarambat)
  * Resolves #541
- Jan 04, 2024 (Timothy Carambat)
  * Handle special tokens in TikToken (resolves #525)
  * Remove duplicate method; add a clarifying comment on the implementation
- Nov 06, 2023 (Timothy Carambat)
  * WIP on continuous prompt window summary
  * Move chat out of the vector DB layer; simplify the chat interface; normalize the LLM model interface; add a compression abstraction; clean up the compressor (TODO: Anthropic)
  * Implement compression for Anthropic; fix LanceDB sources
  * Clean up vector DBs and check that LanceDB, Chroma, and Pinecone return valid metadata sources
  * Resolve Weaviate citation sources not working with schema
  * Comment cleanup