This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- Jan 27, 2025
  Jason authored
- Jan 16, 2025
  Timothy Carambat authored
  * Support historical message image inputs/attachments for n+1 queries
  * patch gemini
  * OpenRouter vision support cleanup
  * xai vision history support
  * Mistral logging
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Dec 16, 2024
  Timothy Carambat authored
  * WIP performance metric tracking
  * fix: patch UI trying to .toFixed() null metric; Anthropic tracking migration; cleanup logs
  * Apipie implementation, not tested
  * Cleanup Anthropic notes, add support for AzureOpenAI tracking
  * bedrock token metric tracking
  * Cohere support
  * feat: improve default stream handler to track for providers who are actually OpenAI compliant in usage reporting; add deepseek support
  * feat: Add FireworksAI tracking reporting; fix: improve handler when usage: null is reported (why?)
  * Add token reporting for GenericOpenAI
  * token reporting for koboldcpp + lmstudio
  * lint
  * support Groq token tracking
  * HF token tracking
  * token tracking for togetherai
  * LiteLLM token tracking
  * linting + Mistral token tracking support
  * XAI token metric reporting
  * native provider runner
  * LocalAI token tracking
  * Novita token tracking
  * OpenRouter token tracking
  * Apipie stream metrics
  * textwebgenui token tracking
  * perplexity token reporting
  * ollama token reporting
  * lint
  * put back comment
  * Rip out LC ollama wrapper and use official library
  * patch images with new ollama lib
  * improve ollama offline message
  * fix image handling in ollama llm provider
  * lint
  * NVIDIA NIM token tracking
  * update OpenAI compatibility responses
  * UI/UX: show/hide metrics on click for user preference
  * update bedrock client
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
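For the OpenAI-compliant providers mentioned above, usage reporting during streaming typically arrives on a final chunk when `stream_options.include_usage` is requested, and some backends still send `usage: null`. A minimal, hedged sketch of a tolerant accumulator using the `openai` Node SDK; the env var names and metric fields are assumptions, not the project's actual handler:

```js
const OpenAI = require("openai");

// Hedged sketch, not anything-llm's handler: stream a chat completion from any
// OpenAI-compatible endpoint and record token usage only when the provider
// actually reports it.
async function streamWithMetrics(messages) {
  const client = new OpenAI({
    baseURL: process.env.LLM_BASE_PATH, // assumption: any OpenAI-compatible URL
    apiKey: process.env.LLM_API_KEY,    // assumption: illustrative env var name
  });

  const stream = await client.chat.completions.create({
    model: process.env.LLM_MODEL_PREF || "gpt-4o-mini",
    messages,
    stream: true,
    stream_options: { include_usage: true }, // usage arrives on the final chunk when supported
  });

  let text = "";
  const metrics = { prompt_tokens: 0, completion_tokens: 0 };
  for await (const chunk of stream) {
    text += chunk.choices?.[0]?.delta?.content || "";
    // Some backends report usage: null until the last chunk (or never); guard for both.
    if (chunk.usage) {
      metrics.prompt_tokens = chunk.usage.prompt_tokens ?? 0;
      metrics.completion_tokens = chunk.usage.completion_tokens ?? 0;
    }
  }
  return { text, metrics };
}
```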
- Nov 04, 2024
  Timothy Carambat authored
  * feat: add new model provider: Novita AI
  * feat: finished novita AI
  * fix: code lint
  * remove unneeded logging
  * add back log for novita stream not self closing
  * Clarify ENV vars for LLM/embedder separation for future; patch ENV check for workspace/agent provider
  Co-authored-by: Jason <ggbbddjm@gmail.com>
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Aug 15, 2024
  Timothy Carambat authored
  * Enable agent context windows to be accurate per provider:model
  * Refactor model mapping to external file; add token count to document length instead of char count; reference promptWindowLimit from AIProvider in central location
  * remove unused imports
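The per provider:model context window change above boils down to a static lookup keyed by provider and model, plus a token-based (rather than character-based) length estimate. A minimal sketch, assuming a hypothetical `MODEL_MAP` object and illustrative numbers rather than the project's actual external mapping file:

```js
// Hypothetical mapping: the real project keeps this in an external file and
// the context window values below are illustrative only.
const MODEL_MAP = {
  openai: { "gpt-4o": 128000, "gpt-3.5-turbo": 16385 },
  anthropic: { "claude-3-5-sonnet-20241022": 200000 },
};

// Resolve the prompt window for a provider:model pair, falling back to a
// conservative default when the model is unknown.
function promptWindowLimit(provider, model, fallback = 4096) {
  return MODEL_MAP[provider]?.[model] ?? fallback;
}

// Rough token estimate (~4 chars per token) when no tokenizer is wired in;
// the commit's point is to budget documents by tokens, not raw characters.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}
```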
- Aug 02, 2024
  Timothy Carambat authored
- Jul 31, 2024
  Timothy Carambat authored
  * Add multimodality support
  * Add Bedrock, KoboldCpp, LocalAI, and TextWebGenUI multi-modal
  * temp dev build
  * patch bad import
  * noscrolls for windows dnd
  * noscrolls for windows dnd
  * update README
  * update README
  * add multimodal check
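For OpenAI-compatible chat APIs, multimodal support generally means packing the user prompt plus base64 image attachments into a content array. A hedged sketch; the attachment field names (`mime`, `contentString`) are assumptions for illustration, not the project's schema:

```js
// Hypothetical helper: fold a user prompt plus image attachments into an
// OpenAI-style multimodal message. Attachment fields are assumptions.
function toMultimodalMessage(prompt, attachments = []) {
  if (!attachments.length) return { role: "user", content: prompt };
  return {
    role: "user",
    content: [
      { type: "text", text: prompt },
      ...attachments.map((a) => ({
        type: "image_url",
        // a data URI keeps this provider-agnostic for OpenAI-compatible APIs
        image_url: { url: `data:${a.mime};base64,${a.contentString}` },
      })),
    ],
  };
}
```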
- Jul 29, 2024
  Timothy Carambat authored
- Jul 22, 2024
  Timothy Carambat authored
- Jun 28, 2024
  Timothy Carambat authored
  Add type defs to helpers
- May 22, 2024
  timothycarambat authored
- May 17, 2024
  Timothy Carambat authored
- May 01, 2024
  Sean Hatfield authored
  * remove sendChat and streamChat functions/references in all LLM providers
  * remove unused imports
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Apr 30, 2024
  Timothy Carambat authored
  * Bump `openai` package to latest; tested all except localai
  * bump LocalAI support with latest image
  * add deprecation notice
  * linting
- Apr 26, 2024
  Timothy Carambat authored
  * Strengthen field validations on user updates
  * update writables
  timothycarambat authored
- Apr 23, 2024
  Timothy Carambat authored
  * patch agent invocation rule
  * Add dynamic model cache from OpenRouter API for context length and available models
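The dynamic model cache above can be built from OpenRouter's public model listing, which returns each model's id and context length. A hedged sketch; the in-memory cache shape and hourly TTL are assumptions for illustration, not the project's implementation:

```js
// Hedged sketch: fetch OpenRouter's model list and cache id -> context length.
// The endpoint shape (data[].id, data[].context_length) follows OpenRouter's
// documented response; everything else here is illustrative.
let cache = { models: {}, fetchedAt: 0 };
const TTL_MS = 1000 * 60 * 60; // refresh roughly hourly

async function openRouterModels() {
  if (Date.now() - cache.fetchedAt < TTL_MS) return cache.models;
  const res = await fetch("https://openrouter.ai/api/v1/models");
  const { data = [] } = await res.json();
  cache = {
    models: Object.fromEntries(
      data.map((m) => [m.id, { id: m.id, contextLength: m.context_length }])
    ),
    fetchedAt: Date.now(),
  };
  return cache.models;
}

// usage (sketch): (await openRouterModels())["openai/gpt-4o"]?.contextLength
```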
- Mar 12, 2024
  Timothy Carambat authored
  * Stop generation button during stream-response
  * add custom stop icon
  * add stop to thread chats
- Feb 24, 2024
  Sean Hatfield authored
  * WIP openrouter integration
  * add OpenRouter options to onboarding flow and data handling
  * add todo to fix headers for rankings
  * OpenRouter LLM support complete
  * Fix hanging response stream with OpenRouter; update tagline; update comment
  * update timeout comment
  * wait for first chunk to start timer
  * sort OpenRouter models by organization
  * uppercase first letter of organization
  * sort grouped models by org
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
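The "wait for first chunk to start timer" fix is worth spelling out: OpenRouter can pause noticeably before the first token, so an idle watchdog should only begin counting once chunks are actually flowing, then reset on every chunk. A hedged sketch of that pattern, with illustrative names rather than the project's actual stream handler:

```js
// Hedged sketch: close a hanging stream if chunks stop arriving, but do not
// start the watchdog until the first chunk lands.
function makeStreamWatchdog(onTimeout, idleMs = 500) {
  let lastChunkAt = null; // stays null until the first chunk arrives
  const interval = setInterval(() => {
    if (lastChunkAt === null) return; // timer has not started yet
    if (Date.now() - lastChunkAt > idleMs) {
      clearInterval(interval);
      onTimeout(); // e.g. end the SSE response with whatever text accumulated
    }
  }, idleMs);

  return {
    chunkReceived: () => { lastChunkAt = Date.now(); },
    stop: () => clearInterval(interval),
  };
}

// usage (sketch):
// const watchdog = makeStreamWatchdog(() => response.end());
// for await (const chunk of stream) { watchdog.chunkReceived(); write(chunk); }
// watchdog.stop();
```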
- Feb 14, 2024
  Timothy Carambat authored
  * refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon
  * no thread in sync chat since only the API uses it; adjust import locations
- Feb 07, 2024
  Timothy Carambat authored
- Jan 17, 2024
  Sean Hatfield authored
  * add support for mistral api
  * update docs to show support for Mistral
  * add default temp to all providers, suggest different results per provider
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
  Sean Hatfield authored
  * WIP model selection per workspace (migrations and openai saves properly)
  * revert OpenAiOption
  * add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
  * remove unneeded comments
  * update logic for when LLMProvider is reset; reset AI provider files with master
  * remove frontend/api reset of workspace chat and move logic to updateENV; add postUpdate callbacks to envs
  * set preferred model for chat on class instantiation
  * remove extra param
  * linting
  * remove unused var
  * refactor chat model selection on workspace
  * linting
  * add fallback for base path to localai models
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 10, 2024
  Sean Hatfield authored
  * add Together AI LLM support
  * update readme to support together ai
  * Patch togetherAI implementation
  * add model sorting/option labels by organization for model selection
  * linting + add data handling for TogetherAI
  * change truthy statement; patch validLLMSelection method
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Dec 28, 2023
  Timothy Carambat authored
  * move internal functions to private in class; simplify LC message converter
  * Fix hanging Context text when none is present
- Dec 04, 2023
  Timothy Carambat authored
  * Add API key option to LocalAI
  * add api key for model dropdown selector
- Nov 14, 2023
  Timothy Carambat authored
  * feature: add LocalAI as llm provider
  * update Onboarding/mgmt settings; grab models from models endpoint for localai; merge with master
  * update streaming for complete chunk streaming; update localAI LLM to be able to stream
  * force schema on URL
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
  Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
- Nov 13, 2023
  Timothy Carambat authored
  * assume default model where appropriate
  * merge with master and fix other model refs
  Timothy Carambat authored
  * [Draft] Enable chat streaming for LLMs
  * stream only, move sendChat to deprecated
  * Update TODO deprecation comments; update console output color for streaming disabled
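Moving providers to stream-only generally means exposing a single streaming entry point that yields tokens as they arrive, so the caller can relay them (for example over SSE) instead of waiting on a blocking sendChat. A minimal, hedged sketch using the `openai` SDK's async iterator; the function and env var names are illustrative, not the project's interface:

```js
const OpenAI = require("openai");

// Hypothetical stream-only path: yield text deltas as they arrive so the
// HTTP handler can forward them immediately.
async function* streamGetChatCompletion(messages, model = "gpt-3.5-turbo") {
  const client = new OpenAI({ apiKey: process.env.OPEN_AI_KEY }); // env name is an assumption
  const stream = await client.chat.completions.create({ model, messages, stream: true });
  for await (const chunk of stream) {
    const token = chunk.choices?.[0]?.delta?.content;
    if (token) yield token;
  }
}

// usage (sketch): write each token to an SSE response as it is produced.
// for await (const token of streamGetChatCompletion(messages)) res.write(`data: ${token}\n\n`);
```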
- Nov 09, 2023
  Francisco Bischoff authored
  * Using OpenAI API locally
  * Infinite prompt input and compression implementation (#332)
    * WIP on continuous prompt window summary
    * wip
    * Move chat out of VDB; simplify chat interface; normalize LLM model interface; have compression abstraction; cleanup compressor; TODO: Anthropic stuff
    * Implement compression for Anthropic; fix lancedb sources
    * cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
    * Resolve Weaviate citation sources not working with schema
    * comment cleanup
  * disable import on hosted instances (#339)
    * disable import on hosted instances
    * Update UI on disabled import/export
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
  * Add support for gpt-4-turbo 128K model (#340); resolves #336
  * 315 show citations based on relevancy score (#316)
    * settings for similarity score threshold and prisma schema updated
    * prisma schema migration for adding similarityScore setting
    * WIP
    * Min score default change
    * added similarityThreshold checking for all vectordb providers
    * linting
    Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
  * rename localai to lmstudio
  * forgot files that were renamed
  * normalize model interface
  * add model and context window limits
  * update LMStudio tagline
  * Fully working LMStudio integration
  Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
  Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>