This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- Feb 03, 2025: timothycarambat authored
- Jan 28, 2025: timothycarambat authored
- Jan 27, 2025: Sean Hatfield authored
  - remove native llm
  - remove node-llama-cpp from dockerfile
  - remove unneeded items from dockerfile
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
- Jan 24, 2025: Sean Hatfield authored
  - implement dynamic fetching of togetherai models
  - implement caching for togetherai models
  - update gitignore for togetherai model caching
  - Remove models.json from git tracking
  - Remove .cached_at from git tracking
  - lint
  - revert unneeded change
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
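The Jan 24, 2025 entry above replaces a hard-coded TogetherAI model list with one fetched at runtime and cached on disk (hence models.json and .cached_at leaving git tracking). A minimal sketch of that fetch-and-cache pattern, assuming TogetherAI's OpenAI-style https://api.together.xyz/v1/models listing endpoint and a Node 18+ runtime with global fetch; the file names, cache directory, and one-day TTL here are illustrative, not the project's actual values.

```typescript
import { promises as fs } from "fs";
import path from "path";

// Illustrative cache location and TTL; the real project uses its own storage dir.
const CACHE_DIR = path.resolve(".cache", "togetherai");
const MODELS_FILE = path.join(CACHE_DIR, "models.json");
const STAMP_FILE = path.join(CACHE_DIR, ".cached_at");
const TTL_MS = 24 * 60 * 60 * 1000; // refresh once a day

type TogetherAiModel = { id: string; context_length?: number };

async function cacheIsFresh(): Promise<boolean> {
  try {
    const cachedAt = Number(await fs.readFile(STAMP_FILE, "utf8"));
    return Date.now() - cachedAt < TTL_MS;
  } catch {
    return false; // no stamp file yet, so treat the cache as stale
  }
}

export async function togetherAiModels(apiKey: string): Promise<TogetherAiModel[]> {
  if (await cacheIsFresh()) {
    return JSON.parse(await fs.readFile(MODELS_FILE, "utf8"));
  }

  // Assumed endpoint and response shape (a flat array of model objects).
  const res = await fetch("https://api.together.xyz/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`TogetherAI /models failed: ${res.status}`);
  const models = (await res.json()) as TogetherAiModel[];

  await fs.mkdir(CACHE_DIR, { recursive: true });
  await fs.writeFile(MODELS_FILE, JSON.stringify(models, null, 2));
  await fs.writeFile(STAMP_FILE, String(Date.now()));
  return models;
}
```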
- Dec 17, 2024: Timothy Carambat authored
  - Add support for gemini authenticated models endpoint: add customModels entry, add un-authed fallback to default listing, separate models by experimental status (resolves #2866)
  - add back improved logic for apiVersion decision making
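The Dec 17, 2024 entry adds a live Gemini model listing with a fallback when no key is available. A rough sketch of that pattern, assuming Google's public Generative Language API model-listing endpoint; the fallback list and the "exp" substring heuristic for experimental models are placeholders, not the project's actual logic.

```typescript
type GeminiModel = { id: string; experimental: boolean };

// Placeholder fallback used when the authenticated listing is unavailable.
const DEFAULT_MODELS: GeminiModel[] = [
  { id: "gemini-1.5-pro", experimental: false },
  { id: "gemini-1.5-flash", experimental: false },
];

export async function geminiModels(apiKey?: string): Promise<GeminiModel[]> {
  if (!apiKey) return DEFAULT_MODELS; // un-authed fallback to default listing

  try {
    // Assumed endpoint: Google Generative Language API model listing.
    const res = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`
    );
    if (!res.ok) throw new Error(`Gemini models request failed: ${res.status}`);
    const body = (await res.json()) as { models?: { name: string }[] };

    return (body.models ?? []).map((m) => {
      const id = m.name.replace("models/", "");
      // Crude heuristic: treat anything flagged "exp" in the id as experimental.
      return { id, experimental: /exp/i.test(id) };
    });
  } catch {
    return DEFAULT_MODELS; // fall back rather than breaking the settings UI
  }
}
```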
- Dec 13, 2024: Sean Hatfield authored
  - fix apipie streaming/sort by chat models
  - lint
  - linting
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Dec 11, 2024: Timothy Carambat authored
- Dec 05, 2024: Timothy Carambat authored
  - Add Support for NVIDIA NIM
  - update README
  - linting
- Nov 13, 2024: Sean Hatfield authored
  - patch bad models endpoint path in lm studio embedding engine
  - convert to OpenAI wrapper compatibility
  - add URL force parser/validation for LMStudio connections
  - remove comment
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
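The Nov 13, 2024 entry fixes a malformed models endpoint path by normalizing whatever base URL the user enters for LM Studio. A small sketch of that kind of URL forcing, assuming the goal is an OpenAI-compatible base ending in /v1; the exact rules (default scheme, default port 1234) are assumptions, not the project's parser.

```typescript
// Force a user-supplied LM Studio address into a clean OpenAI-style base URL.
// Assumptions: default scheme http, default port 1234, path forced to /v1.
export function normalizeLmStudioUrl(input: string): string {
  const withScheme = /^https?:\/\//i.test(input) ? input : `http://${input}`;
  const url = new URL(withScheme); // throws on garbage input, which is the "validation"

  if (!url.port) url.port = "1234"; // LM Studio's default local port
  url.pathname = "/v1"; // strip whatever path was pasted and force the API base
  url.search = "";
  url.hash = "";
  return url.toString().replace(/\/$/, "");
}

// e.g. normalizeLmStudioUrl("localhost:1234/v1/models/") -> "http://localhost:1234/v1"
```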
- Nov 04, 2024: Timothy Carambat authored
  - feat: add new model provider: Novita AI
  - feat: finished novita AI
  - fix: code lint
  - remove unneeded logging
  - add back log for novita stream not self closing
  - Clarify ENV vars for LLM/embedder separation for future; patch ENV check for workspace/agent provider
  Co-authored-by: Jason <ggbbddjm@gmail.com>
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Oct 21, 2024: Timothy Carambat authored
  - Add Grok/XAI support for LLM & agents
  - forgot files
- Oct 16, 2024: Sean Hatfield authored
  - support openai o1 models
  - Prevent o1 use for agents; add a getter for isO1Model
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
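The Oct 16, 2024 entry gates agent features off for OpenAI's o1 family, which at launch did not support the same parameters as other chat models. A hypothetical shape for that guard; the class and method names are illustrative, not AnythingLLM's actual provider class.

```typescript
// Hypothetical provider wrapper illustrating an isO1Model guard.
export class OpenAiProvider {
  constructor(private readonly model: string) {}

  // o1 / o1-mini / o1-preview behave differently from gpt-* chat models.
  get isO1Model(): boolean {
    return this.model.startsWith("o1");
  }

  // Agents rely on features (tool calls, system prompts, streaming) the o1
  // models did not expose at launch, so refuse to run them there.
  assertAgentCompatible(): void {
    if (this.isO1Model) {
      throw new Error(`Model ${this.model} cannot be used for agents.`);
    }
  }
}

// Usage: new OpenAiProvider("o1-preview").assertAgentCompatible() throws.
```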
- Oct 15, 2024: Timothy Carambat authored
  - resolves #2464, resolves #989 (Note: streaming not supported)
- Sep 26, 2024: Sean Hatfield authored
  - add deepseek support
  - lint
  - update deepseek context length
  - add deepseek to onboarding
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
- Sep 16, 2024: Timothy Carambat authored
  - Issue #1943: Add support for LLM provider - Fireworks AI
  - Update UI selection boxes; update base AI keys for future embedder support if needed; add agent capabilities for FireworksAI
  - class only return
  Co-authored-by: Aaron Van Doren <vandoren96+1@gmail.com>
- Jul 24, 2024: Timothy Carambat authored
  - Add support for Groq /models endpoint
  - linting
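The Jul 24, 2024 entry swaps a static Groq model list for the provider's own listing. Groq exposes an OpenAI-compatible API, so the lookup might look like the following sketch; the endpoint and response shape follow the OpenAI convention, and Node 18+'s global fetch is assumed.

```typescript
type GroqModel = { id: string; owned_by?: string };

export async function groqModels(apiKey: string): Promise<GroqModel[]> {
  // OpenAI-compatible listing endpoint exposed by Groq.
  const res = await fetch("https://api.groq.com/openai/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Groq /models failed: ${res.status}`);

  // OpenAI-style responses wrap the list in a `data` array.
  const body = (await res.json()) as { data?: GroqModel[] };
  return body.data ?? [];
}
```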
- May 16, 2024: Sean Hatfield authored
  - litellm LLM provider support
  - fix lint error
  - change import orders; fix issue with model retrieval
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
- May 14, 2024: Timothy Carambat authored
  - Add Speech-to-text and Text-to-speech providers
  - add files and update comment
  - update comments
  - patch: bad playerRef check
- May 13, 2024: Timothy Carambat authored
- May 08, 2024: Sean Hatfield authored
  - add LMStudio agent support (generic); support "work" with non-tool-callable LLMs, highly dependent on system specs
  - add comments
  - enable few-shot prompting per function for OSS models
  - Add Agent support for Ollama models
  - azure, groq, koboldcpp agent support complete + WIP togetherai
  - WIP gemini agent support
  - WIP gemini blocked and will not fix for now
  - azure fix
  - merge fix
  - add localai agent support
  - azure untooled agent support
  - merge fix
  - refactor implementation of several agent providers
  - update bad merge comment
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- May 02, 2024: Sean Hatfield authored
  - koboldcpp LLM support
  - update .env.examples for koboldcpp support
  - update LLM preference order; update koboldcpp comments
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Apr 30, 2024: Timothy Carambat authored
  - Bump `openai` package to latest; tested all except localai
  - bump LocalAI support with latest image
  - add deprecation notice
  - linting
- Apr 26, 2024: timothycarambat authored
- Apr 23, 2024: Timothy Carambat authored
  - patch agent invocation rule
  - Add dynamic model cache from OpenRouter API for context length and available models
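The Apr 23, 2024 entry pulls model ids and context windows from OpenRouter instead of hard-coding them. A minimal sketch, assuming OpenRouter's public /api/v1/models listing (which reports a context_length per model) and a simple in-memory cache; the one-hour TTL and the 4096-token fallback are illustrative choices.

```typescript
type OpenRouterModel = { id: string; context_length: number };

let cache: { models: OpenRouterModel[]; fetchedAt: number } | null = null;
const TTL_MS = 60 * 60 * 1000; // illustrative one-hour refresh window

export async function openRouterModels(): Promise<OpenRouterModel[]> {
  if (cache && Date.now() - cache.fetchedAt < TTL_MS) return cache.models;

  // Public listing; each entry carries its maximum context length.
  const res = await fetch("https://openrouter.ai/api/v1/models");
  if (!res.ok) throw new Error(`OpenRouter /models failed: ${res.status}`);
  const body = (await res.json()) as { data: OpenRouterModel[] };

  cache = { models: body.data, fetchedAt: Date.now() };
  return cache.models;
}

export async function contextWindowFor(modelId: string): Promise<number> {
  const models = await openRouterModels();
  // Fall back to a conservative default when the model is unknown.
  return models.find((m) => m.id === modelId)?.context_length ?? 4096;
}
```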
- Apr 16, 2024: Timothy Carambat authored
  - Enable dynamic GPT model dropdown
- Mar 22, 2024: Timothy Carambat authored
- Feb 24, 2024: Sean Hatfield authored
  - WIP openrouter integration
  - add OpenRouter options to onboarding flow and data handling
  - add todo to fix headers for rankings
  - OpenRouter LLM support complete
  - Fix hanging response stream with OpenRouter; update tagline; update comment
  - update timeout comment
  - wait for first chunk to start timer
  - sort OpenRouter models by organization
  - uppercase first letter of organization
  - sort grouped models by org
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
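Two items in the Feb 24, 2024 entry (fixing the hanging response stream and waiting for the first chunk before starting the timer) describe a watchdog around the token stream. A generic sketch of that idea, not the project's implementation: the timer only arms once data starts flowing, each read is raced against it, and the stream is abandoned if the gap between chunks grows too long. The idle timeout and error message are illustrative.

```typescript
// Wrap an async iterable of stream chunks with an inactivity watchdog.
export async function* withStreamWatchdog<T>(
  stream: AsyncIterable<T>,
  idleTimeoutMs = 30_000
): AsyncGenerator<T> {
  const iterator = stream[Symbol.asyncIterator]();
  let firstChunkSeen = false;

  while (true) {
    let timer: NodeJS.Timeout | undefined;
    const read = iterator.next();

    // Before the first chunk we wait indefinitely: provider queueing time
    // should not count as a stall. After that, race each read against a timer.
    const guarded = firstChunkSeen
      ? Promise.race([
          read,
          new Promise<never>((_, reject) => {
            timer = setTimeout(
              () => reject(new Error("Stream stalled between chunks.")),
              idleTimeoutMs
            );
          }),
        ])
      : read;

    try {
      const result = await guarded;
      if (result.done) return;
      firstChunkSeen = true;
      yield result.value;
    } finally {
      if (timer) clearTimeout(timer);
    }
  }
}
```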
- Feb 22, 2024: Sean Hatfield authored
  - add LLM support for perplexity
  - update README & example env
  - fix ENV keys in example env files
  - slight changes for QA of perplexity support
  - Update Perplexity AI name
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 17, 2024: Sean Hatfield authored
  - add support for mistral api
  - update docs to show support for Mistral
  - add default temp to all providers, suggest different results per provider
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 17, 2024: Sean Hatfield authored
  - WIP model selection per workspace (migrations and openai saves properly)
  - revert OpenAiOption
  - add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
  - remove unneeded comments
  - update logic for when LLMProvider is reset; reset AI provider files with master
  - remove frontend/api reset of workspace chat and move logic to updateENV; add postUpdate callbacks to envs
  - set preferred model for chat on class instantiation
  - remove extra param
  - linting
  - remove unused var
  - refactor chat model selection on workspace
  - linting
  - add fallback for base path to localai models
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
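The second Jan 17, 2024 entry wires the per-workspace model choice through to the provider at construction time. A simplified sketch of the fallback order implied there (workspace setting, then the system-wide environment default, then a hard default); the type, function, and env variable names are illustrative, not the project's actual schema.

```typescript
// Illustrative shape; the real project stores these on Prisma models and ENV.
interface Workspace {
  chatProvider?: string | null;
  chatModel?: string | null;
}

// Resolve which model a chat in this workspace should use.
export function preferredChatModel(
  workspace: Workspace,
  envDefaultModel: string | undefined,
  hardDefault = "gpt-3.5-turbo"
): string {
  // Workspace-level override wins, then the system-wide setting, then a fallback.
  return workspace.chatModel ?? envDefaultModel ?? hardDefault;
}

// e.g. preferredChatModel(workspace, process.env.DEFAULT_CHAT_MODEL /* name illustrative */)
```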
- Jan 10, 2024: Sean Hatfield authored
  - add Together AI LLM support
  - update readme to support together ai
  - Patch togetherAI implementation
  - add model sorting/option labels by organization for model selection
  - linting + add data handling for TogetherAI
  - change truthy statement; patch validLLMSelection method
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
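The Jan 10, 2024 entry groups TogetherAI's models by organization so the selection dropdown gets labeled sections. A small sketch of that grouping, assuming ids of the form "organization/model-name"; the label capitalization and alphabetical ordering are illustrative choices.

```typescript
type ModelOption = { id: string; name: string };

// Group "org/model" ids into labeled buckets and sort each bucket alphabetically.
export function groupModelsByOrganization(
  models: ModelOption[]
): Record<string, ModelOption[]> {
  const grouped: Record<string, ModelOption[]> = {};

  for (const model of models) {
    // Ids are assumed to look like "mistralai/Mixtral-8x7B-Instruct-v0.1".
    const [org] = model.id.split("/");
    const label = org.charAt(0).toUpperCase() + org.slice(1);
    (grouped[label] ??= []).push(model);
  }

  for (const label of Object.keys(grouped)) {
    grouped[label].sort((a, b) => a.name.localeCompare(b.name));
  }
  return grouped;
}
```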
- Dec 28, 2023: Timothy Carambat authored
  - Add support for Ollama as LLM provider (resolves #493)
- Dec 16, 2023: Timothy Carambat authored
- Dec 11, 2023: Timothy Carambat authored
  - connect #417
- Dec 07, 2023: Timothy Carambat authored
  - Implement use of native embedder (all-MiniLM-L6-v2); stop showing prisma queries during dev
  - Add native embedder as an available embedder selection
  - wrap model loader in try/catch
  - print progress on download
  - add built-in LLM support (experimental)
  - Update to progress output for embedder
  - move embedder selection options to component
  - safety checks for modelfile
  - update ref
  - Hide selection when on hosted subdomain
  - update documentation; hide localLlama when on hosted
  - safety checks for storage of models
  - update dockerfile to pre-build Llama.cpp bindings
  - update lockfile
  - add langchain doc comment
  - remove extraneous --no-metal option
  - Show data handling for private LLM
  - persist model in memory for N+1 chats
  - update import; update dev comment on token model size
  - update primary README
  - chore: more readme updates and remove screenshots - too much to maintain, just use the app!
  - remove screenshot link
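The Dec 07, 2023 entry introduces an on-device embedder so no external service is required. One common way to run all-MiniLM-L6-v2 locally in a Node project is via the @xenova/transformers feature-extraction pipeline, sketched below; whether that is the loader the project ultimately uses is an assumption here, and the lazy singleton only echoes the "persist model in memory" idea from the entry.

```typescript
import { pipeline } from "@xenova/transformers";

// Lazily load the local embedding model once and reuse it across requests.
let embedderPromise: Promise<any> | null = null;

async function getEmbedder() {
  embedderPromise ??= pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
  return embedderPromise;
}

export async function embedText(text: string): Promise<number[]> {
  const embedder = await getEmbedder();
  // Mean pooling + normalization yields a single 384-dimension sentence vector.
  const output = await embedder(text, { pooling: "mean", normalize: true });
  return Array.from(output.data as Float32Array);
}
```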
- Dec 04, 2023: Timothy Carambat authored
  - Add API key option to LocalAI
  - add api key for model dropdown selector
- Nov 14, 2023: Timothy Carambat authored
  - feature: add LocalAI as llm provider
  - update Onboarding/mgmt settings; grab models from models endpoint for localai; merge with master
  - update streaming for complete chunk streaming; update localAI LLM to be able to stream
  - force schema on URL
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
  Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
- Oct 31, 2023: Timothy Carambat authored
  - Implement retrieval and use of fine-tune models; cleanup LLM selection code (resolves #311)
  - Cleanup from PR bot