This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- Feb 25, 2025
  Skanda Kaashyap authored
  * add claude 3-7 sonnet
  * made all the changes everywhere
  * add 3-7-sonnet-latest model
  * lint
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Jan 27, 2025
  Sean Hatfield authored
  * remove native llm
  * remove node-llama-cpp from dockerfile
  * remove unneeded items from dockerfile
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
- Dec 12, 2024
  timothycarambat authored
- Dec 11, 2024
  timothycarambat authored
  connect #2788
- Dec 05, 2024
  Timothy Carambat authored
  * Add Support for NVIDIA NIM
  * update README
  * linting
- Nov 21, 2024
  timothycarambat authored
  resolves #2657
- Nov 20, 2024
  Timothy Carambat authored
- Nov 06, 2024
  timothycarambat authored
- Nov 04, 2024
  Timothy Carambat authored
  * feat: add new model provider: Novita AI
  * feat: finished novita AI
  * fix: code lint
  * remove unneeded logging
  * add back log for novita stream not self closing
  * Clarify ENV vars for LLM/embedder separation for future
  * Patch ENV check for workspace/agent provider
  Co-authored-by: Jason <ggbbddjm@gmail.com>
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
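The "Clarify ENV vars for LLM/embedder separation" bullet above is about keeping the chat-LLM provider and the embedding provider configurable independently. A minimal TypeScript sketch of that idea, assuming the `LLM_PROVIDER` and `EMBEDDING_ENGINE` variable names from the project's example env files; the fallback values and the helper are illustrative, not this repository's actual code:

```typescript
// Illustrative sketch: read the chat-LLM provider and the embedding provider
// from separate env vars so they can differ (e.g. chat via Novita AI while
// embeddings stay on another engine). Names and defaults are assumptions.
function resolveProviderNames(): { llm: string; embedder: string } {
  const llm = process.env.LLM_PROVIDER ?? "openai";
  const embedder = process.env.EMBEDDING_ENGINE ?? "native";
  return { llm, embedder };
}

console.log(resolveProviderNames());
```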
- Oct 21, 2024
  Timothy Carambat authored
  * Add Grok/XAI support for LLM & agents
  * forgot files
- Sep 16, 2024
  Timothy Carambat authored
  * Issue #1943: Add support for LLM provider - Fireworks AI
  * Update UI selection boxes
  * Update base AI keys for future embedder support if needed
  * Add agent capabilities for FireworksAI
  * class only return
  Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>
- Sep 11, 2024
  Timothy Carambat authored
  Add Gemini models; resolves #2263
- Aug 02, 2024
  RahSwe authored
- Jul 26, 2024
- Jul 24, 2024
  Timothy Carambat authored
  * Add support for Groq /models endpoint
  * linting
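The Groq `/models` entry above describes populating the model list from Groq's OpenAI-compatible API instead of a hard-coded set. A rough TypeScript sketch under that assumption; the `GROQ_API_KEY` variable and the response shape follow Groq's OpenAI-compatible conventions and are not taken from this repository's code:

```typescript
// Sketch: list available Groq models via the OpenAI-compatible /models endpoint.
// Requires Node 18+ (global fetch) and a GROQ_API_KEY in the environment.
async function listGroqModels(): Promise<string[]> {
  const res = await fetch("https://api.groq.com/openai/v1/models", {
    headers: { Authorization: `Bearer ${process.env.GROQ_API_KEY}` },
  });
  if (!res.ok) throw new Error(`Groq /models request failed: ${res.status}`);
  const body = (await res.json()) as { data: Array<{ id: string }> };
  return body.data.map((m) => m.id); // ids can then feed a model dropdown
}
```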
- Jul 23, 2024
  Timothy Carambat authored
  add AWS bedrock support for LLM + agents
- Jul 03, 2024
  Timothy Carambat authored
  * enable support for generic openAI as Agent provider
- Jun 20, 2024
  Sean Hatfield authored
  add support for claude sonnet 3.5 model
- May 29, 2024
  Sean Hatfield authored
  * fix project names with special characters for github repo data connector
  * linting

  Sean Hatfield authored
  support for gemini-1.0-pro model and fixes to prompt window limit
- May 23, 2024
  Sean Hatfield authored
  * add support for gemini-1.5-flash-latest
  * update comment in gemini LLM provider
- May 08, 2024
  Sean Hatfield authored
  * add text gen web ui LLM provider support
  * update README
  * README typo
  * update TextWebUI display name
  * patch workspace<>model support for provider
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- May 02, 2024
  Sean Hatfield authored
  * getChatCompletion working
  * WIP streaming
  * working streaming, WIP abort stream
  * implement cohere embedder support
  * remove inputType option from cohere embedder
  * fix cohere LLM from not aborting stream when canceled by user
  * Patch Cohere implementation
  * add cohere to onboarding
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Apr 30, 2024
  Timothy Carambat authored
  * Bump `openai` package to latest; tested all except localai
  * bump LocalAI support with latest image
  * add deprecation notice
  * linting
- Apr 23, 2024
  Timothy Carambat authored
  * patch agent invocation rule
  * Add dynamic model cache from OpenRouter API for context length and available models
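The OpenRouter entry above describes caching model metadata (ids and context lengths) pulled from the public API rather than shipping a static list. A hedged TypeScript sketch of such a cache, based on OpenRouter's documented `GET https://openrouter.ai/api/v1/models` response; the TTL and helper names are illustrative, not AnythingLLM's actual implementation:

```typescript
// Sketch: cache OpenRouter model ids and context lengths for a fixed TTL.
type ModelInfo = { id: string; context_length: number };

let cache: { fetchedAt: number; models: ModelInfo[] } | null = null;
const TTL_MS = 60 * 60 * 1000; // illustrative one-hour cache lifetime

async function getOpenRouterModels(): Promise<ModelInfo[]> {
  if (cache && Date.now() - cache.fetchedAt < TTL_MS) return cache.models;
  const res = await fetch("https://openrouter.ai/api/v1/models");
  if (!res.ok) throw new Error(`OpenRouter /models request failed: ${res.status}`);
  const body = (await res.json()) as { data: ModelInfo[] };
  cache = { fetchedAt: Date.now(), models: body.data };
  return cache.models;
}

// Usage: look up a model's context window before sizing a prompt.
async function contextLengthFor(modelId: string): Promise<number | undefined> {
  return (await getOpenRouterModels()).find((m) => m.id === modelId)?.context_length;
}
```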
- Apr 22, 2024
  Sean Hatfield authored
  add support for more groq models
- Apr 19, 2024
  Timothy Carambat authored
  * Add support for Gemini-1.5 Pro; bump @google/generative-ai pkg; toggle apiVersion if beta model selected (resolves #1109)
  * update API messages due to package change
- Apr 17, 2024
  Timothy Carambat authored
  * Add Anthropic agent support with new API and tool_calling
  * patch useProviderHook to unset default models on provider change
- Apr 16, 2024
  Timothy Carambat authored
  * Enable dynamic GPT model dropdown
- Apr 05, 2024
  Timothy Carambat authored
  * Enable per-workspace provider/model combination
  * cleanup
  * remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model
  * add space
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Mar 14, 2024
  Sean Hatfield authored
  add Haiku model support
- Mar 06, 2024
  Sean Hatfield authored
  * implement new version of anthropic sdk and support new models
  * remove handleAnthropicStream and move to handleStream inside anthropic provider
  * update useGetProvidersModels for new anthropic models

  Sean Hatfield authored
  * Groq LLM support complete
  * update useGetProvidersModels for groq models
  * Add definitions, update comments and error log reports, add example envs
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Feb 14, 2024
  Sean Hatfield authored
  * WIP new settings layout
  * add suggested messages to general & appearance and clean up/make more user friendly
  * lazy load workspace settings pages
  * css fix on X button for document picker where button is barely clickable
  * remove additional workspace settings page
  * fix thread selection action when on thread
  * refactor inputs into sub-components, remove unused paths
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 26, 2024
  Sean Hatfield authored
  * add gpt-4-turbo-preview
  * add gpt-4-turbo-preview to valid models
- Jan 22, 2024
  Sean Hatfield authored
  * add gpt-3.5-turbo-1106 model for openai LLM
  * add gpt-3.5-turbo-1106 as valid model for backend and per workspace model selection
- Jan 17, 2024
  Sean Hatfield authored
  * WIP model selection per workspace (migrations and openai saves properly)
  * revert OpenAiOption
  * add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
  * remove unneeded comments
  * update logic for when LLMProvider is reset; reset Ai provider files with master
  * remove frontend/api reset of workspace chat and move logic to updateENV; add postUpdate callbacks to envs
  * set preferred model for chat on class instantiation
  * remove extra param
  * linting
  * remove unused var
  * refactor chat model selection on workspace
  * linting
  * add fallback for base path to localai models
  Co-authored-by: timothycarambat <rambat1010@gmail.com>