This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
Pull mirroring updated.
- Oct 15, 2024, Sean Hatfield authored:
  * Support generic OpenAI workspace model
  * Update UI for free-form input for some providers
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
- Aug 26, 2024, Timothy Carambat authored
- Aug 15, 2024, Timothy Carambat authored:
  * Enable agent context windows to be accurate per provider:model
  * Refactor model mapping to an external file; add token count to document length instead of char count; reference promptWindowLimit from AIProvider in a central location
  * Remove unused imports
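A rough sketch of what a central provider:model context-window map with a `promptWindowLimit` lookup might look like (the table values, names, and fallback here are illustrative assumptions, not the project's actual code):

```javascript
// Illustrative central map of provider:model -> prompt window (token) limits.
// Values and names are assumptions for demonstration only.
const MODEL_WINDOWS = {
  openai: { "gpt-4": 8192, "gpt-3.5-turbo": 16385 },
  anthropic: { "claude-2": 100000 },
};

const DEFAULT_WINDOW = 4096; // conservative fallback for unknown models

// Look up the prompt window limit for a provider/model pair,
// falling back to the default when the pair is unknown.
function promptWindowLimit(provider, model) {
  return MODEL_WINDOWS[provider]?.[model] ?? DEFAULT_WINDOW;
}

console.log(promptWindowLimit("openai", "gpt-4")); // 8192
```

Centralizing the table this way lets every agent and chat path share one source of truth instead of duplicating per-provider constants.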
- Jun 28, 2024, Timothy Carambat authored:
  * Add type defs to helpers
- May 17, 2024, Timothy Carambat authored
- May 01, 2024, Sean Hatfield authored:
  * Remove sendChat and streamChat functions/references in all LLM providers
  * Remove unused imports
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Mar 12, 2024, Timothy Carambat authored:
  * Stop generation button during stream-response
  * Add custom stop icon
  * Add stop to thread chats
- Feb 14, 2024, Timothy Carambat authored:
  * Refactor stream/chat/embed-stream into a single execution logic path so it is easier to maintain and build upon
  * No thread in sync chat since only the API uses it; adjust import locations
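Collapsing the three handlers into one execution path might be sketched as a single mode-parameterized entry point (handler name, modes, and response shape here are assumptions for illustration):

```javascript
// Hypothetical single execution path for chat, stream, and embed-stream.
// Shared pre-processing happens once; only delivery differs by mode.
async function handleChatRequest({ mode, prompt }) {
  // Shared validation/normalization lives in one place now.
  const cleaned = prompt.trim();
  switch (mode) {
    case "chat":
      return { mode, streamed: false, reply: `echo: ${cleaned}` };
    case "stream":
    case "embed-stream":
      return { mode, streamed: true, reply: `echo: ${cleaned}` };
    default:
      throw new Error(`Unknown mode: ${mode}`);
  }
}
```

The benefit the commit message describes is exactly this: one place to fix bugs or add features instead of three diverging copies.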
- Feb 07, 2024, Timothy Carambat authored
- Jan 17, 2024, Sean Hatfield authored:
  * Add support for the Mistral API
  * Update docs to show support for Mistral
  * Add a default temperature to all providers; suggest different results per provider
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
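Per-provider default temperatures could be expressed as a small lookup with a global fallback; the specific values below are illustrative assumptions, not vendor-documented defaults:

```javascript
// Illustrative per-provider default temperatures (values are assumptions).
const PROVIDER_DEFAULT_TEMP = {
  openai: 0.7,
  anthropic: 0.7,
  mistral: 0.0, // some providers behave better with deterministic defaults
};

// Fall back to a generic default for providers not in the table.
function defaultTemperature(provider) {
  return PROVIDER_DEFAULT_TEMP[provider] ?? 0.7;
}
```

Note the `??` (nullish coalescing) rather than `||`: a legitimate default of `0.0` would be wrongly replaced by `||`.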
- Sean Hatfield authored:
  * WIP model selection per workspace (migrations and OpenAI save properly)
  * Revert OpenAiOption
  * Add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
  * Remove unneeded comments
  * Update logic for when LLMProvider is reset; reset AI provider files with master
  * Remove frontend/API reset of workspace chat and move logic to updateENV; add postUpdate callbacks to envs
  * Set preferred model for chat on class instantiation
  * Remove extra param; linting; remove unused var
  * Refactor chat model selection on workspace
  * Add fallback for base path to localai models
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
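"Set preferred model for chat on class instantiation" might look like the constructor pattern below, where a workspace-level override wins over a system-wide default (class name and fields are hypothetical):

```javascript
// Hypothetical provider class resolving the chat model per workspace
// at construction time, with a system-wide fallback.
class LLMProvider {
  constructor(workspace = {}, systemDefault = "gpt-3.5-turbo") {
    // Workspace override wins; otherwise use the system default.
    this.model = workspace.chatModel ?? systemDefault;
  }
}

const ws = new LLMProvider({ chatModel: "gpt-4" });
console.log(ws.model); // "gpt-4"
```

Resolving the model once at instantiation means every downstream call path sees a consistent choice instead of re-deriving it per request.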
- Jan 04, 2024, Timothy Carambat authored:
  * Resolves #492
- Dec 28, 2023, Timothy Carambat authored:
  * Move internal functions to private in class; simplify lc message converter
  * Fix hanging "Context" text when none is present
- Nov 16, 2023, Sean Hatfield authored:
  * Allow use of any embedder for any LLM; update data handling modal
  * Apply embedder override and fall back to OpenAI and Azure models
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
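The override-with-fallback behavior could be sketched as follows (function and field names are illustrative assumptions, not the project's config schema):

```javascript
// Hypothetical embedder resolution: an explicit embedder override wins;
// otherwise fall back to the LLM provider's native embedder, which only
// OpenAI and Azure supply in this sketch.
function getEmbedderEngine(config) {
  if (config.embeddingEngine) return config.embeddingEngine;
  if (["openai", "azure"].includes(config.llmProvider)) {
    return config.llmProvider;
  }
  throw new Error("No embedder configured for this LLM provider.");
}
```

Decoupling the embedder from the chat LLM is what lets, say, an Ollama chat model pair with an OpenAI embedding model.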
- Nov 13, 2023, Timothy Carambat authored:
  * [Draft] Enable chat streaming for LLMs
  * Stream only; move sendChat to deprecated
  * Update TODO deprecation comments; update console output color for streaming disabled
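The shift from one-shot `sendChat` to streaming is often modeled with an async generator that yields tokens as they arrive; this is a minimal sketch of that interface, not the provider's actual API:

```javascript
// Hypothetical streaming interface: yield chunks as the provider emits them.
// Here the "provider" is mocked by an array of tokens.
async function* streamChat(tokens) {
  for (const token of tokens) {
    // In a real provider this would yield chunks from the HTTP stream.
    yield token;
  }
}

// Consumer-side helper: drain the stream into a full reply string.
async function collect(stream) {
  let out = "";
  for await (const chunk of stream) out += chunk;
  return out;
}
```

A deprecated `sendChat` can then be implemented as `collect(streamChat(...))`, which is one reason the one-shot path becomes redundant once streaming lands.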
- Nov 06, 2023, Timothy Carambat authored:
  * WIP on continuous prompt window summary
  * Move chat out of VDB; simplify chat interface; normalize LLM model interface; add compression abstraction; clean up compressor
  * Implement compression for Anthropic; fix LanceDB sources
  * Clean up vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
  * Resolve Weaviate citation sources not working with schema
  * Comment cleanup
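A prompt-window compression abstraction typically trims chat history to a token budget, newest messages first; this sketch uses a crude word count as a stand-in for a real tokenizer (all names here are illustrative):

```javascript
// Hypothetical history compression: keep the most recent messages that fit
// within a token budget. Word count stands in for real token counting.
function compressHistory(messages, tokenLimit) {
  const count = (m) => m.content.split(/\s+/).length;
  const kept = [];
  let used = 0;
  // Walk newest-to-oldest so recent turns survive compression.
  for (const msg of [...messages].reverse()) {
    const cost = count(msg);
    if (used + cost > tokenLimit) break;
    kept.unshift(msg); // restore chronological order
    used += cost;
  }
  return kept;
}
```

More elaborate schemes summarize the dropped prefix instead of discarding it, which is what "continuous prompt window summary" suggests.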
- Oct 30, 2023, Timothy Carambat authored:
  * WIP Anthropic support for chat, and chat and query with context
  * Add onboarding support for Anthropic
  * Fix Anthropic answer parsing; move embedding selector to general util
  * Cleanup
- Aug 22, 2023, Timothy Carambat authored:
  * Resolves #184
- Aug 04, 2023, Timothy Carambat authored:
  * Remove LangchainJS for chat support chaining; implement runtime LLM selection; implement AzureOpenAI support for LLM + embedding; WIP on frontend; update env to reflect the new fields
  * Replace keys with LLM selection in settings modal; enforce checks for new ENVs depending on LLM selection
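"Runtime LLM selection" usually means an env-driven factory that returns the configured provider; this sketch is a hypothetical shape of that idea (the class and env variable names are assumptions):

```javascript
// Hypothetical provider classes standing in for real implementations.
class OpenAiLLM {
  name = "openai";
}
class AzureOpenAiLLM {
  name = "azure";
}

// Env-driven factory: the selection is read at runtime, not hard-coded,
// which is what replacing the LangchainJS chain enables.
function getLLMProvider(selection = process.env.LLM_PROVIDER) {
  switch (selection) {
    case "openai":
      return new OpenAiLLM();
    case "azure":
      return new AzureOpenAiLLM();
    default:
      throw new Error(`Unsupported LLM provider: ${selection}`);
  }
}
```

The strict `default: throw` branch mirrors the commit's "enforce checks for new ENVs": a misconfigured provider fails loudly at startup rather than silently falling back.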