This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- May 13, 2024
timothycarambat authored
Make LanceDB the default vector database provider in the backend, to prevent issues where this key is somehow not set by the user and results in a Pinecone error even though they never chose Pinecone as their vector DB.
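For readers wiring this up, the vector database is chosen through the server's .env file. A minimal sketch of the idea, assuming the key name follows the project's .env.example (it may differ between versions):

    # Illustrative .env sketch; key name assumed, check .env.example for your release
    VECTOR_DB="lancedb"   # backend now falls back to LanceDB when this key is unset
    # other providers seen elsewhere in this log: pinecone, chroma, weaviate, qdrant, milvus, zilliz, astra

With this change, a missing VECTOR_DB value no longer surfaces as a Pinecone configuration error.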
- May 08, 2024
Sean Hatfield authored
* add text gen web ui LLM provider support
* update README
* README typo
* update TextWebUI display name; patch workspace<>model support for provider
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- May 02, 2024
Sean Hatfield authored
* koboldcpp LLM support
* update .env.examples for koboldcpp support
* update LLM preference order; update koboldcpp comments
Co-authored-by: timothycarambat <rambat1010@gmail.com>
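The .env.examples change above is where the new provider is wired up. A hypothetical sketch of what such an entry could look like; the key names and values here are assumptions, not confirmed settings from that file:

    # Illustrative KoboldCPP provider settings (key names assumed)
    LLM_PROVIDER="koboldcpp"
    KOBOLD_CPP_BASE_PATH="http://127.0.0.1:5000/v1"    # local KoboldCPP OpenAI-compatible endpoint
    KOBOLD_CPP_MODEL_PREF="my-local-model"             # placeholder model name
    KOBOLD_CPP_MODEL_TOKEN_LIMIT=4096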
Sean Hatfield authored
* getChatCompletion working; WIP streaming
* WIP
* working streaming; WIP abort stream
* implement Cohere embedder support
* remove inputType option from Cohere embedder
* fix Cohere LLM not aborting stream when canceled by user
* patch Cohere implementation
* add Cohere to onboarding
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Apr 23, 2024
Timothy Carambat authored
* Add generic OpenAI endpoint support
* allow any input for model, in case the provider does not support a models endpoint
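A generic OpenAI-compatible endpoint is configured much like the named providers. A rough sketch of the idea; the key names and values below are assumptions rather than confirmed settings:

    # Illustrative generic OpenAI-compatible provider settings (key names assumed)
    LLM_PROVIDER="generic-openai"
    GENERIC_OPEN_AI_BASE_PATH="http://localhost:8000/v1"   # any OpenAI-compatible server
    GENERIC_OPEN_AI_MODEL_PREF="my-model"                   # free-form string, see note below
    GENERIC_OPEN_AI_API_KEY="sk-placeholder"

Because some servers do not implement a /v1/models listing, the model field accepts any string rather than a value validated against a models endpoint.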
- Apr 19, 2024
Timothy Carambat authored
* Add LMStudio embedding endpoint support
* update alive path check for HEAD; remove commented JSX
* update comment
- Apr 06, 2024
Timothy Carambat authored
* Embedder download - fallback URL
* improve logging for native embedder
- Apr 05, 2024
Timothy Carambat authored
* Enable per-workspace provider/model combination
* cleanup
* remove resetWorkspaceChatModels and wipeWorkspaceModelPreference to prevent workspace from resetting model
* add space
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Mar 06, 2024
Sean Hatfield authored
* Groq LLM support complete
* update useGetProvidersModels for Groq models
* add definitions; update comments and error log reports; add example envs
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Feb 27, 2024
Timothy Carambat authored
* Add Ollama embedder model support calls
* update docs
- Feb 24, 2024
Sean Hatfield authored
* WIP OpenRouter integration
* add OpenRouter options to onboarding flow and data handling
* add todo to fix headers for rankings
* OpenRouter LLM support complete
* fix hanging response stream with OpenRouter; update tagline; update comment
* update timeout comment
* wait for first chunk to start timer
* sort OpenRouter models by organization
* uppercase first letter of organization
* sort grouped models by org
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Feb 22, 2024
Sean Hatfield authored
* add LLM support for Perplexity
* update README & example env
* fix ENV keys in example env files
* slight changes for QA of Perplexity support
* update Perplexity AI name
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Feb 06, 2024
Timothy Carambat authored
- Jan 26, 2024
Hakeem Abbas authored
* feature: Integrate Astra as vectorDBProvider
* Update .env.example
* Add env.example to docker example file
  Update spellcheck for Astra
  Update Astra key for vector selection
  Update order of AstraDB options
  Resize Astra logo image to 330x330
  Update methods of Astra to take in latest vectorDB params like TopN and more
  Update Astra interface to support default methods and avoid crash errors from 404 collections
  Update Astra interface to comply with max chunk insertion limitations
  Update Astra interface to dynamically set dimensionality from chunk 0 size on creation
* reset workspaces
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 18, 2024
Timothy Carambat authored
* feat: Add support for Zilliz Cloud by Milvus
* update placeholder text; update data handling stmt
* update zilliz descriptor
- Jan 17, 2024
Sean Hatfield authored
* add support for Mistral API
* update docs to show support for Mistral
* add default temp to all providers, suggest different results per provider
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Sean Hatfield authored
* WIP model selection per workspace (migrations and OpenAI saves properly)
* revert OpenAiOption
* add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
* remove unneeded comments
* update logic for when LLMProvider is reset; reset AI provider files with master
* remove frontend/api reset of workspace chat and move logic to updateENV; add postUpdate callbacks to envs
* set preferred model for chat on class instantiation
* remove extra param
* linting
* remove unused var
* refactor chat model selection on workspace
* linting
* add fallback for base path to localai models
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 12, 2024
Shuyoou authored
* issue #543 support Milvus vector DB
* migrate Milvus to use MilvusClient instead of ORM
  normalize env setup for docs/implementation
  feat: embedder model dimension added
* update comments
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 10, 2024
Sean Hatfield authored
* add Together AI LLM support
* update README to support Together AI
* Patch togetherAI implementation
* add model sorting/option labels by organization for model selection
* linting + add data handling for TogetherAI
* change truthy statement; patch validLLMSelection method
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Dec 28, 2023
Timothy Carambat authored
* Add support for Ollama as LLM provider; resolves #493
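A sketch of how an Ollama-backed setup might look in the .env file; the key names are assumptions based on the pattern the project uses for other providers, and the values are placeholders:

    # Illustrative Ollama provider settings (key names assumed)
    LLM_PROVIDER="ollama"
    OLLAMA_BASE_PATH="http://127.0.0.1:11434"   # Ollama's default local API address
    OLLAMA_MODEL_PREF="llama2"                  # any model already pulled into Ollama
    OLLAMA_MODEL_TOKEN_LIMIT=4096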
Timothy Carambat authored
resolves #489
- Dec 08, 2023
Timothy Carambat authored
fix: clean up code for embedding length clarity; resolves #388
- Dec 07, 2023
Timothy Carambat authored
* Implement use of native embedder (all-MiniLM-L6-v2); stop showing Prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* add built-in LLM support (experimental)
* Update to progress output for embedder
* move embedder selection options to component
* safety checks for modelfile
* update ref
* Hide selection when on hosted subdomain
* update documentation; hide localLlama when on hosted
* safety checks for storage of models
* update dockerfile to pre-build Llama.cpp bindings
* update lockfile
* add langchain doc comment
* remove extraneous --no-metal option
* Show data handling for private LLM
* persist model in memory for N+1 chats
* update import; update dev comment on token model size
* update primary README
* chore: more README updates and remove screenshots - too much to maintain, just use the app!
* remove screenshot link
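As a rough illustration of the selection this adds, assuming the key name matches the pattern used for the other embedders (not verified against this release):

    # Illustrative embedder selection (key name assumed)
    EMBEDDING_ENGINE="native"   # use the bundled all-MiniLM-L6-v2 model, no external API key required

The native embedder fetches its model the first time it is used, which is what the download-progress and fallback-URL entries earlier in this log refer to.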
- Dec 06, 2023
Timothy Carambat authored
* Implement use of native embedder (all-MiniLM-L6-v2); stop showing Prisma queries during dev
* Add native embedder as an available embedder selection
* wrap model loader in try/catch
* print progress on download
* Update to progress output for embedder
* move embedder selection options to component
* forgot import
* add data privacy alert updates for local embedder
- Nov 16, 2023
Sean Hatfield authored
* allow use of any embedder for any LLM; update data handling modal
* Apply embedder override and fall back to OpenAI and Azure models
Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Nov 14, 2023
Tobias Landenberger authored
* feature: add LocalAI as embedding provider
* chore: add LocalAI image
* chore: add LocalAI embedding examples to docker .env.example
* update setting env; pull models from LocalAI API
* update comments on embedder; don't show cost estimation on UI
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Timothy Carambat authored
* feature: add LocalAI as LLM provider
* update Onboarding/mgmt settings; grab models from models endpoint for LocalAI; merge with master
* update streaming for complete chunk streaming; update LocalAI LLM to be able to stream
* force schema on URL
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: tlandenberger <tobiaslandenberger@gmail.com>
- Nov 09, 2023
Francisco Bischoff authored
* Using OpenAI API locally
* Infinite prompt input and compression implementation (#332)
* WIP on continuous prompt window summary
* wip
* Move chat out of VDB; simplify chat interface; normalize LLM model interface; have compression abstraction; cleanup compressor; TODO: Anthropic stuff
* Implement compression for Anthropic; fix LanceDB sources
* cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
* Resolve Weaviate citation sources not working with schema
* comment cleanup
* disable import on hosted instances (#339)
* Update UI on disabled import/export
* Add support for gpt-4-turbo 128K model (#340); resolves #336
* 315 show citations based on relevancy score (#316)
* settings for similarity score threshold and prisma schema updated
* prisma schema migration for adding similarityScore setting
* WIP
* Min score default change
* added similarityThreshold checking for all vectordb providers
* linting
* rename localai to lmstudio
* forgot files that were renamed
* normalize model interface
* add model and context window limits
* update LMStudio tagline
* Fully working LMStudio integration
Co-authored-by: timothycarambat <rambat1010@gmail.com>
Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
- Oct 30, 2023
timothycarambat authored
Timothy Carambat authored
* WIP Anthropic support for chat, chat and query w/context
* Add onboarding support for Anthropic
* cleanup
* fix Anthropic answer parsing; move embedding selector to general util
- Aug 15, 2023
Timothy Carambat authored
* Add Qdrant support for embedding, chat, and conversation
* Change comments
- Aug 09, 2023
Timothy Carambat authored
- Aug 04, 2023
Timothy Carambat authored
* Remove LangchainJS for chat support chaining; implement runtime LLM selection; implement AzureOpenAI support for LLM + embedding; WIP on frontend; update env to reflect the new fields
* Replace keys with LLM selection in settings modal; enforce checks for new ENVs depending on LLM selection
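Runtime LLM selection means the provider is chosen from the environment rather than hard-coded. A sketch of an Azure OpenAI configuration under that scheme; the key names are assumptions and all values are placeholders:

    # Illustrative Azure OpenAI settings (key names assumed, values are placeholders)
    LLM_PROVIDER="azure"
    AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com"
    AZURE_OPENAI_KEY="azure-key-placeholder"
    OPEN_MODEL_PREF="my-gpt35-deployment"           # Azure deployment name for chat
    EMBEDDING_MODEL_PREF="my-embedding-deployment"  # Azure deployment name for embeddings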
- Jul 28, 2023
Timothy Carambat authored
* Move OpenAI API calls into their own interface/class; move curate sources to be specific for each vectorDB's response for chat/query
* remove comment
Timothy Carambat authored
improve citations to show all text chunks referred to, and expand the citation to view the full referenced text (#161)
* chunk text of the same document together
* remove debug
- Jul 20, 2023
Timothy Carambat authored
* refactor: convert chunk embedding to one API call
* chore: lint
* fix Chroma for batch and single vectorization of text
* Fix LanceDB multi and single vectorization
* Fix Pinecone for single and multiple embeddings
Co-authored-by: Jonathan Waltz <volcanicislander@gmail.com>
- Jun 26, 2023
Timothy Carambat authored
* Add chat/conversation mode as the default chat mode
  Show menu for toggling options for chat/query/reset command
  Show chat status below input
  resolves #61
* remove console logs
- Jun 09, 2023
timothycarambat authored
Timothy Carambat authored
* add start of LanceDB support
* LanceDB initial support
* add null method for deletion of documents from namespace, since LanceDB does not support it; show warning modal on frontend for this
* update .env.example and LanceDB methods for sourcing
* change export method
* update readme
- Jun 08, 2023
timothycarambat authored