This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- Feb 24, 2024
Sean Hatfield authored
* WIP openrouter integration
* add OpenRouter options to onboarding flow and data handling
* add todo to fix headers for rankings
* OpenRouter LLM support complete
* Fix hanging response stream with OpenRouter; update tagline; update comment
* update timeout comment
* wait for first chunk to start timer
* sort OpenRouter models by organization
* uppercase first letter of organization
* sort grouped models by org

Co-authored-by: timothycarambat <rambat1010@gmail.com>
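The "wait for first chunk to start timer" fix above amounts to an idle timeout that only begins counting once the stream actually produces data, so a slow time-to-first-token is not mistaken for a hang. A minimal sketch of that pattern, assuming an OpenAI-compatible streaming endpoint such as OpenRouter's; the function name and timeout value are illustrative:

```typescript
// Sketch: abort a streamed completion only if the stream goes idle, and start
// counting only after the first chunk arrives (so slow time-to-first-token is
// not treated as a hang). The endpoint shape is assumed OpenAI-compatible.
async function streamWithIdleTimeout(
  url: string,
  apiKey: string,
  payload: unknown,
  idleMs = 20_000
): Promise<string> {
  const controller = new AbortController();
  let timer: ReturnType<typeof setTimeout> | undefined;
  const resetIdleTimer = () => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => controller.abort(), idleMs);
  };

  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
    signal: controller.signal,
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let text = "";
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      resetIdleTimer(); // timer only runs once data is actually flowing
      text += decoder.decode(value, { stream: true });
    }
  } finally {
    if (timer) clearTimeout(timer);
  }
  return text;
}
```

Starting the timer only after the first chunk keeps long model warm-up from tripping the abort while still cutting off a stream that stalls mid-response.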
- Feb 22, 2024
Sean Hatfield authored
* add LLM support for perplexity
* update README & example env
* fix ENV keys in example env files
* slight changes for QA of perplexity support
* Update Perplexity AI name

Co-authored-by: timothycarambat <rambat1010@gmail.com>
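Perplexity exposes an OpenAI-compatible chat completions endpoint, which is why the change above is mostly ENV keys and data handling. A rough sketch of a non-streaming call; the env variable names and default model id are assumptions, not necessarily what the project uses:

```typescript
// Sketch of a non-streaming Perplexity chat call. Perplexity's API is
// OpenAI-compatible; the env variable names and default model id below are
// illustrative assumptions.
async function perplexityChat(prompt: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: process.env.PERPLEXITY_MODEL_PREF ?? "sonar", // hypothetical default
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });
  if (!res.ok) throw new Error(`Perplexity request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```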
- Jan 17, 2024
Sean Hatfield authored
* add support for mistral api
* update docs to show support for Mistral
* add a default temperature for all providers; suggested defaults differ per provider

Co-authored-by: timothycarambat <rambat1010@gmail.com>
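The default-temperature item above boils down to a per-provider lookup with a safe fallback when no explicit value is set. A small sketch of that idea; the provider keys and numeric values are illustrative, not the defaults the project ships:

```typescript
// Sketch of per-provider default temperatures with a safe fallback. The
// provider keys and values here are illustrative assumptions.
const DEFAULT_TEMPERATURE: Record<string, number> = {
  openai: 0.7,
  anthropic: 0.7,
  mistral: 0.0,
  localai: 0.7,
};

function temperatureFor(provider: string, workspaceTemp?: number): number {
  return workspaceTemp ?? DEFAULT_TEMPERATURE[provider] ?? 0.7;
}
```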
Sean Hatfield authored
* WIP model selection per workspace (migrations and openai saves properly)
* revert OpenAiOption
* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
* remove unneeded comments
* update logic for when LLMProvider is reset; reset AI provider files with master
* remove frontend/api reset of workspace chat and move logic to updateENV; add postUpdate callbacks to envs
* set preferred model for chat on class instantiation
* remove extra param
* linting
* remove unused var
* refactor chat model selection on workspace
* linting
* add fallback for base path to localai models

Co-authored-by: timothycarambat <rambat1010@gmail.com>
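Per-workspace model selection means the chat model is resolved from the workspace record first and only then from a provider-level default, with a base-path fallback for LocalAI. A simplified sketch; the field and env names are hypothetical:

```typescript
// Hypothetical shapes: prefer the model saved on the workspace, then the
// provider-level env default, then a hard-coded fallback. Also shows the
// base-path fallback for LocalAI models mentioned in the commit above.
interface Workspace {
  chatModel?: string | null;
}

function resolveChatModel(workspace: Workspace, envDefault?: string): string {
  return workspace.chatModel ?? envDefault ?? "gpt-3.5-turbo";
}

function resolveLocalAIBasePath(): string {
  return process.env.LOCAL_AI_BASE_PATH ?? "http://localhost:8080/v1";
}
```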
- Jan 10, 2024
Sean Hatfield authored
* add Together AI LLM support
* update readme to support together ai
* Patch togetherAI implementation
* add model sorting/option labels by organization for model selection
* linting + add data handling for TogetherAI
* change truthy statement; patch validLLMSelection method

Co-authored-by: timothycarambat <rambat1010@gmail.com>
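Both the Together AI and OpenRouter changes group models by organization for the selection dropdown, since both catalogs use ids of the form organization/model-name. A sketch of that grouping and sorting; the option shape and sort order are assumptions:

```typescript
// Sketch of grouping catalog models by organization for a grouped dropdown,
// assuming ids of the form "organization/model-name".
interface ModelOption {
  id: string;
  organization: string;
  name: string;
}

function groupModelsByOrganization(modelIds: string[]): Record<string, ModelOption[]> {
  const groups: Record<string, ModelOption[]> = {};
  for (const id of modelIds) {
    const [org, ...rest] = id.split("/");
    // Uppercase the first letter of the organization for display.
    const organization = org.charAt(0).toUpperCase() + org.slice(1);
    (groups[organization] ??= []).push({ id, organization, name: rest.join("/") });
  }
  // Sort models within each organization, then sort the organizations themselves.
  for (const org of Object.keys(groups)) {
    groups[org].sort((a, b) => a.name.localeCompare(b.name));
  }
  return Object.fromEntries(
    Object.entries(groups).sort(([a], [b]) => a.localeCompare(b))
  );
}
```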
- Dec 28, 2023
Timothy Carambat authored
* Add support for Ollama as LLM provider; resolves #493
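Ollama serves a local HTTP API (by default on port 11434), so adding it as a provider is mostly a matter of pointing requests at that server. A minimal non-streaming sketch; the env variable name and model id are illustrative:

```typescript
// Minimal non-streaming sketch of using a local Ollama server as the LLM
// provider. The env variable name and model id are illustrative.
async function ollamaChat(prompt: string): Promise<string> {
  const base = process.env.OLLAMA_BASE_PATH ?? "http://127.0.0.1:11434";
  const res = await fetch(`${base}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama2", // any model already pulled locally
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.message.content;
}
```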
- Dec 16, 2023
Timothy Carambat authored
- Dec 11, 2023
Timothy Carambat authored
connect #417
- Dec 07, 2023
Timothy Carambat authored
* Implement use of native embedder (all-MiniLM-L6-v2); stop showing prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation; hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
* update import; update dev comment on token model size
* update primary README
* chore: more readme updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
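The native embedder and the "persist model in memory for N+1 chats" item above suggest loading all-MiniLM-L6-v2 once and reusing it across requests. A sketch of that pattern, assuming the @xenova/transformers package; the module-level caching and function names are illustrative, not the project's actual implementation:

```typescript
// Sketch of a native embedder around all-MiniLM-L6-v2, assuming the
// @xenova/transformers package. Loading the pipeline once and reusing it
// mirrors the "persist model in memory for N+1 chats" idea.
import { pipeline } from "@xenova/transformers";

let embedderPromise: Promise<any> | undefined;

function getEmbedder() {
  // First call downloads/loads the model; later calls reuse the same instance.
  embedderPromise ??= pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
  return embedderPromise;
}

async function embedTexts(texts: string[]): Promise<number[][]> {
  const extractor = await getEmbedder();
  const output = await extractor(texts, { pooling: "mean", normalize: true });
  // Tensor of shape [texts.length, 384]; convert to plain number arrays.
  return output.tolist();
}
```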
- Dec 04, 2023
Timothy Carambat authored
* Add API key option to LocalAI
* add api key for model dropdown selector
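With LocalAI the API key is optional, so the Authorization header should only be attached when a key is configured, both for chat calls and for the model dropdown fetch. A short sketch against the OpenAI-compatible models listing; env variable names are assumptions:

```typescript
// Sketch of the model-dropdown fetch with an optional LocalAI API key: the
// Authorization header is only attached when a key is configured. Env
// variable names are assumptions.
async function localAIModels(): Promise<string[]> {
  const base = process.env.LOCAL_AI_BASE_PATH ?? "http://localhost:8080/v1";
  const apiKey = process.env.LOCAL_AI_API_KEY;
  const headers: Record<string, string> = {};
  if (apiKey) headers.Authorization = `Bearer ${apiKey}`;

  const res = await fetch(`${base}/models`, { headers });
  if (!res.ok) throw new Error(`LocalAI request failed: ${res.status}`);
  const data = await res.json();
  // OpenAI-compatible listing: { data: [{ id: "model-name", ... }, ...] }
  return data.data.map((m: { id: string }) => m.id);
}
```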
- Nov 14, 2023
Timothy Carambat authored
* feature: add LocalAI as llm provider
* update Onboarding/mgmt settings; grab models from models endpoint for LocalAI; merge with master
* update streaming for complete chunk streaming; update localAI LLM to be able to stream
* force schema on URL

Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
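The "force schema on URL" item above guards against a user entering a LocalAI base path without an explicit scheme. A tiny sketch of that normalization; the helper name is illustrative:

```typescript
// Sketch of the "force schema on URL" normalization: make sure a user-entered
// LocalAI base path always carries an explicit http(s) scheme.
function forceScheme(basePath: string): string {
  return /^https?:\/\//i.test(basePath) ? basePath : `http://${basePath}`;
}

// e.g. forceScheme("localhost:8080/v1") -> "http://localhost:8080/v1"
```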
- Oct 31, 2023
Timothy Carambat authored
* Implement retrieval and use of fine-tune models; cleanup LLM selection code; resolves #311
* Cleanup from PR bot
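Retrieving fine-tune models generally means listing the account's models and keeping the fine-tuned ids alongside the base chat models. A sketch against an OpenAI-style /v1/models listing; the "ft:" prefix check is an assumption about how fine-tuned ids are named, not something stated in this commit:

```typescript
// Sketch of surfacing fine-tuned models next to the base chat models from an
// OpenAI-style /v1/models listing. The "ft:" prefix check is an assumption.
async function chatModelsIncludingFineTunes(apiKey: string): Promise<string[]> {
  const res = await fetch("https://api.openai.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`OpenAI request failed: ${res.status}`);
  const { data } = await res.json();
  return data
    .map((m: { id: string }) => m.id)
    .filter((id: string) => id.startsWith("gpt-") || id.startsWith("ft:"));
}
```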