This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
Pull mirroring updated.
- May 23, 2024
  Sean Hatfield authored
  * add support for gemini-1.5-flash-latest
  * update comment in gemini LLM provider
- May 20, 2024
  Timothy Carambat authored
  * Allow setting of safety thresholds for Gemini
  * linting
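A configurable safety threshold like the one above typically ends up as the `safetySettings` array that `@google/generative-ai` accepts. Below is a minimal sketch, assuming a single configured threshold applied across all harm categories; the function name and fallback behavior are illustrative, not the project's actual code (the category and threshold string constants themselves come from the Gemini API):

```javascript
// Harm categories the Gemini API lets you set a blocking threshold for.
const HARM_CATEGORIES = [
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_DANGEROUS_CONTENT",
];

const VALID_THRESHOLDS = new Set([
  "BLOCK_NONE",
  "BLOCK_ONLY_HIGH",
  "BLOCK_MEDIUM_AND_ABOVE",
  "BLOCK_LOW_AND_ABOVE",
]);

// Illustrative helper: expand one configured threshold into the
// safetySettings array shape the SDK expects, falling back to the
// API default when an unknown value is configured.
function buildSafetySettings(threshold = "BLOCK_MEDIUM_AND_ABOVE") {
  const value = VALID_THRESHOLDS.has(threshold)
    ? threshold
    : "BLOCK_MEDIUM_AND_ABOVE";
  return HARM_CATEGORIES.map((category) => ({ category, threshold: value }));
}
```

The resulting array would be passed alongside the model parameters when constructing a generative model instance.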
- May 17, 2024
  Timothy Carambat authored
- May 11, 2024
  Sean Hatfield authored
  validate messages schema for gemini provider
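Gemini's chat schema is stricter than most providers': history must begin with a user turn and roles must alternate strictly between `user` and `model`. A hedged sketch of the kind of check the commit above describes (the function name is illustrative):

```javascript
// Illustrative validator for Gemini-style chat history:
// - non-empty array
// - first turn is "user"
// - roles strictly alternate between "user" and "model"
function validateGeminiMessages(messages) {
  if (!Array.isArray(messages) || messages.length === 0) return false;
  if (messages[0].role !== "user") return false;
  for (let i = 1; i < messages.length; i++) {
    const curr = messages[i].role;
    if (curr !== "user" && curr !== "model") return false;
    if (curr === messages[i - 1].role) return false; // roles must alternate
  }
  return true;
}
```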
- May 01, 2024
  Sean Hatfield authored
  * remove sendChat and streamChat functions/references in all LLM providers
  * remove unused imports
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Apr 19, 2024
  Timothy Carambat authored
  * Add support for Gemini-1.5 Pro; bump @google/generative-ai pkg; toggle apiVersion if beta model selected (resolves #1109)
  * update API messages due to package change
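The apiVersion toggle above can be sketched as a small model-to-version lookup: at the time, Gemini 1.5 models were only served on the `v1beta` API surface. The model list here is an assumption for illustration, not the project's actual list:

```javascript
// Hypothetical list of models that (at the time) required the beta API surface.
const BETA_MODELS = ["gemini-1.5-pro-latest"];

// Illustrative helper: pick the SDK apiVersion based on the selected model.
function apiVersionFor(model) {
  return BETA_MODELS.includes(model) ? "v1beta" : "v1";
}
```

The `@google/generative-ai` SDK accepts an `apiVersion` in the request options passed to `getGenerativeModel`, which is presumably where a value like this would be threaded through.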
- Mar 27, 2024
  Timothy Carambat authored
- Mar 12, 2024
  Timothy Carambat authored
  * Stop generation button during stream-response
  * add custom stop icon
  * add stop to thread chats
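A stop-generation button over a streamed response is usually wired through an `AbortController`: the stream request carries the controller's signal, and the button simply calls `abort()`. A minimal sketch of that pattern (the handle shape is illustrative, not AnythingLLM's actual handler):

```javascript
// Illustrative handle tying a streamed request to a stop button.
function makeStreamHandle() {
  const controller = new AbortController();
  return {
    signal: controller.signal,       // pass as fetch(url, { signal })
    stop: () => controller.abort(),  // wire to the stop button's onClick
    stopped: () => controller.signal.aborted,
  };
}
```

When `stop()` fires mid-stream, the in-flight fetch rejects with an `AbortError`, which the caller catches to finalize the partial response.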
- Feb 14, 2024
  Timothy Carambat authored
  * refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon
  * no thread in sync chat since only the API uses it; adjust import locations
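The shape of that refactor can be sketched as one core function where sync chat, streamed chat, and the embed-widget stream differ only in the options they pass. Everything below is illustrative (names, placeholder logic, and the sync signature stand in for what is really an async pipeline):

```javascript
// Illustrative single execution path: shared pre-flight work happens once,
// and only the delivery step differs between call sites.
function executeChat({ message, stream = false, thread = null, onToken = null }) {
  // Shared pre-flight work (history lookup, context building) would live here.
  const history = thread ? [`thread:${thread}`] : [];
  const prompt = [...history, message].join("\n");

  // Streaming callers supply a token callback; sync callers do not.
  if (stream && onToken) {
    for (const token of prompt.split(" ")) onToken(token);
  }
  return { prompt, streamed: Boolean(stream && onToken) };
}
```

The design point is that new features (threads, new providers) get added in one place instead of three near-duplicate code paths.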
- Feb 07, 2024
  Timothy Carambat authored
- Jan 17, 2024
  Sean Hatfield authored
  * add support for mistral api
  * update docs to show support for Mistral
  * add default temp to all providers, suggest different results per provider
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
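Per-provider default temperatures, as described above, reduce to a small lookup with a global fallback. The specific values below are invented for illustration only; the point is the shape, not the numbers:

```javascript
// Hypothetical per-provider defaults; real values differ per provider's docs.
const PROVIDER_DEFAULT_TEMP = {
  openai: 0.7,
  anthropic: 0.7,
  mistral: 0.0,
};

// Illustrative helper: provider-specific default, else a global fallback.
function defaultTemperature(provider) {
  return PROVIDER_DEFAULT_TEMP[provider] ?? 0.7;
}
```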
  Sean Hatfield authored
  * WIP model selection per workspace (migrations and openai saves properly)
  * revert OpenAiOption
  * add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
  * remove unneeded comments
  * update logic for when LLMProvider is reset; reset Ai provider files with master
  * remove frontend/api reset of workspace chat and move logic to updateENV; add postUpdate callbacks to envs
  * set preferred model for chat on class instantiation
  * remove extra param
  * linting
  * remove unused var
  * refactor chat model selection on workspace
  * linting
  * add fallback for base path to localai models
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
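Per-workspace model selection with a fallback, as this commit describes, boils down to a resolution chain: prefer the workspace's saved model, otherwise fall back to the instance-wide default. The property and env-variable names below are illustrative, not the project's actual ones:

```javascript
// Illustrative resolution chain: workspace preference first,
// then the env-level default, else null (provider decides).
function resolveChatModel(workspace, env) {
  return workspace?.chatModel || env.DEFAULT_CHAT_MODEL || null;
}
```

Resolving this once at provider-class instantiation (as the commit notes) means every downstream call sees a single, already-decided model name.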
- Dec 28, 2023
  Timothy Carambat authored
  * move internal functions to private in class; simplify lc message convertor
  * Fix hanging Context text when none is present
  Timothy Carambat authored
  resolves #489