This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- Oct 16, 2024
  Sean Hatfield authored
  * Support OpenAI o1 models
  * Prevent o1 use for agents; add a getter for isO1Model
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
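  The isO1Model getter mentioned in this commit could look roughly like the sketch below. The class name, default model, and prefix check are illustrative assumptions, not the project's actual code:

  ```javascript
  // Illustrative sketch of an isO1Model-style getter on an LLM provider class.
  // o1 models had launch restrictions (e.g. no system prompts), so callers such
  // as the agent runtime can check this flag before use.
  class OpenAiLLM {
    constructor(modelName = "gpt-4o") {
      this.model = modelName; // configured model name for this provider instance
    }

    // True when the configured model is an o1-family model.
    get isO1Model() {
      return this.model.startsWith("o1");
    }
  }
  ```

  For example, `new OpenAiLLM("o1-preview").isO1Model` is `true`, while a regular GPT model is not flagged.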
- Aug 15, 2024
  Timothy Carambat authored
  * Enable agent context windows to be accurate per provider:model
  * Refactor model mapping to an external file; add token count to document length instead of char count; reference promptWindowLimit from AIProvider in a central location
  * Remove unused imports
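  A central provider:model-to-context-window lookup of the kind this commit describes could be sketched as follows. The table values, helper name, and fallback are assumptions for illustration:

  ```javascript
  // Illustrative external model map: provider -> model -> context window (tokens).
  const MODEL_WINDOWS = {
    openai: { "gpt-4o": 128000, "gpt-3.5-turbo": 16385 },
    anthropic: { "claude-3-opus-20240229": 200000 },
  };

  // Central lookup so every provider resolves its prompt window the same way,
  // with a conservative fallback for unknown provider/model pairs.
  function promptWindowLimit(provider, model, fallback = 4096) {
    return MODEL_WINDOWS[provider]?.[model] ?? fallback;
  }
  ```

  Keeping the table in one file means new models only need a data change, not edits to each provider class.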
- Aug 13, 2024
  PyKen authored
- Jul 31, 2024
  Timothy Carambat authored
  * Add multimodality support
  * Add Bedrock, KoboldCpp, LocalAI, and TextWebGenUI multi-modal support
  * Temp dev build; patch bad import
  * No scrolls for Windows DnD
  * Update README
  * Add multimodal check
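  A "multimodal check" of the sort this commit adds might gate image content on provider support, roughly as below. The function name, message shape, and vision flag are hypothetical, loosely modeled on the OpenAI-style content-parts format:

  ```javascript
  // Illustrative guard: only build multi-modal content parts when the provider
  // supports vision and attachments are actually present; otherwise fall back
  // to a plain text message.
  function buildUserContent(text, attachments = [], supportsVision = false) {
    if (!supportsVision || attachments.length === 0) return text;
    return [
      { type: "text", text },
      ...attachments.map((url) => ({ type: "image_url", image_url: { url } })),
    ];
  }
  ```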
- Jun 28, 2024
  Timothy Carambat authored
  Add type defs to helpers
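  In a plain-JavaScript codebase, "type defs on helpers" typically means JSDoc annotations. A minimal sketch, with a hypothetical helper and typedef (not the project's actual types):

  ```javascript
  /**
   * @typedef {Object} ChatSource
   * @property {string} title - Display title of the source document.
   * @property {string} text - Snippet of source text shown to the user.
   */

  /**
   * Joins sources into a human-readable block for display.
   * @param {ChatSource[]} sources
   * @returns {string}
   */
  function formatSources(sources) {
    return sources.map((s) => `${s.title}: ${s.text}`).join("\n");
  }
  ```

  Editors and `tsc --checkJs` can then type-check callers without converting the file to TypeScript.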
- May 22, 2024
  timothycarambat authored
- May 17, 2024
  Timothy Carambat authored
- May 13, 2024
  Timothy Carambat authored
- May 01, 2024
  Sean Hatfield authored
  * Remove sendChat and streamChat functions/references in all LLM providers
  * Remove unused imports
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Apr 30, 2024
  Timothy Carambat authored
  * Bump the `openai` package to latest (tested all except LocalAI)
  * Bump LocalAI support with the latest image
  * Add deprecation notice
  * Linting
- Apr 16, 2024
  Timothy Carambat authored
  * Enable dynamic GPT model dropdown
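  A dynamic model dropdown is usually built from a live models listing filtered to chat-capable entries. A sketch of such a filter, where the selection rule (gpt-prefixed, non-instruct) is an assumption rather than the project's exact logic:

  ```javascript
  // Given entries shaped like OpenAI's GET /v1/models response items ({ id }),
  // keep only chat-capable GPT models and return a sorted list for the UI.
  function chatModelsForDropdown(models) {
    return models
      .map((m) => m.id)
      .filter((id) => id.startsWith("gpt-") && !id.includes("instruct"))
      .sort();
  }
  ```

  Fetching the list at render time (rather than hardcoding it) is what makes the dropdown "dynamic": new models appear without a code change.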
- Feb 21, 2024
  Timothy Carambat authored
  * Enable full-text queries on documents; show an alert modal on first pin for the client; add the ability to use pins in stream/chat/embed
  * Typo and copy updates
  * Simplify spread of context and sources
- Feb 14, 2024
  Timothy Carambat authored
  * Refactor stream/chat/embed-stream into a single execution logic path so it is easier to maintain and build upon
  * No thread in sync chat since only the API uses it; adjust import locations
- Feb 07, 2024
  Timothy Carambat authored
- Jan 26, 2024
  Sean Hatfield authored
  * Add gpt-4-turbo-preview
  * Add gpt-4-turbo-preview to valid models
- Jan 22, 2024
  Sean Hatfield authored
  * Add the gpt-3.5-turbo-1106 model for the OpenAI LLM
  * Add gpt-3.5-turbo-1106 as a valid model for the backend and per-workspace model selection
- Jan 17, 2024
  Sean Hatfield authored
  * Add support for the Mistral API
  * Update docs to show support for Mistral
  * Add a default temperature to all providers; suggest different defaults per provider
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
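  Per-provider default temperatures could be expressed as a small lookup table like this sketch; the specific values and provider keys are assumptions, not the project's actual settings:

  ```javascript
  // Illustrative per-provider temperature defaults. Different providers behave
  // differently at the same temperature, hence distinct suggested defaults.
  const DEFAULT_TEMPERATURES = { openai: 0.7, anthropic: 0.7, mistral: 0.0 };

  function defaultTemperature(provider) {
    // ?? (not ||) so an explicit 0.0 default is honored rather than skipped.
    return DEFAULT_TEMPERATURES[provider] ?? 0.7;
  }
  ```

  Using `??` instead of `||` matters here: a provider whose default is `0.0` would otherwise fall through to the generic fallback.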
  Sean Hatfield authored
  * WIP model selection per workspace (migrations; OpenAI saves properly)
  * Revert OpenAiOption
  * Add support for models per workspace for Anthropic, LocalAI, Ollama, OpenAI, and TogetherAI
  * Remove unneeded comments
  * Update logic for when LLMProvider is reset; reset AI provider files with master
  * Remove frontend/API reset of workspace chat and move logic to updateENV; add postUpdate callbacks to envs
  * Set preferred model for chat on class instantiation
  * Refactor chat model selection on workspace
  * Add fallback for base path to LocalAI models
  * Remove extra param and unused var; linting
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Dec 28, 2023
  Timothy Carambat authored
  * Move internal functions to private in class; simplify lc message converter
  * Fix hanging "Context" text when none is present
- Nov 16, 2023
  Sean Hatfield authored
  * Allow use of any embedder for any LLM; update data handling modal
  * Apply embedder override and fall back to OpenAI and Azure models
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Nov 13, 2023
  Timothy Carambat authored
  * Assume default model where appropriate
  * Merge with master and fix other model refs
  Timothy Carambat authored
  * [Draft] Enable chat streaming for LLMs
  * Stream only; move sendChat to deprecated
  * Update TODO deprecation comments; update console output color when streaming is disabled
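  The stream-only direction with a deprecated sendChat could be sketched like this; the function bodies and canned reply are illustrative stubs, not the real provider code:

  ```javascript
  // Illustrative stream-first interface: streamChat yields chunks as they
  // arrive (stubbed here with a canned reply), while the legacy sendChat
  // wrapper logs a deprecation notice and collects the full response.
  function* streamChat(messages) {
    for (const token of ["Hello", ", ", "world"]) yield token;
  }

  function sendChat(messages) {
    console.warn("[DEPRECATED] sendChat: use streamChat instead");
    return [...streamChat(messages)].join("");
  }
  ```

  Keeping the deprecated path as a thin wrapper over the stream means there is only one real execution path to maintain.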
- Nov 06, 2023
  Timothy Carambat authored
  Add support for the gpt-4-turbo 128K model (resolves #336)
  Timothy Carambat authored
  * WIP on continuous prompt window summary
  * Move chat out of the VDB; simplify the chat interface; normalize the LLM model interface; add a compression abstraction; clean up the compressor
  * Implement compression for Anthropic; fix LanceDB sources
  * Clean up vector DBs and check that Lance, Chroma, and Pinecone return valid metadata sources
  * Resolve Weaviate citation sources not working with schema
  * Comment cleanup
- Oct 31, 2023
  Timothy Carambat authored
  * Implement retrieval and use of fine-tune models; clean up LLM selection code (resolves #311)
  * Cleanup from PR bot
- Oct 30, 2023
  Timothy Carambat authored
  * WIP Anthropic support for chat, and chat/query with context
  * Add onboarding support for Anthropic
  * Fix Anthropic answer parsing; move embedding selector to general util
- Oct 26, 2023
  Timothy Carambat authored
  The limit is due to the POST body max size; sufficiently large requests will abort automatically. We should report that error back on the frontend during embedding. Update vector DB providers to return on failure.
- Aug 04, 2023
  Timothy Carambat authored
  * Remove LangchainJS for chat support chaining
  * Implement runtime LLM selection
  * Implement AzureOpenAI support for LLM + embedding
  * WIP on frontend; update env to reflect the new fields
  * Replace keys with LLM selection in the settings modal; enforce checks for new ENVs depending on LLM selection
- Jul 28, 2023
  timothycarambat authored
  Timothy Carambat authored
  * Move OpenAI API calls into their own interface/class; move curate-sources to be specific to each vector DB's response for chat/query
  * Remove comment
- Jun 27, 2023
  Timothy Carambat authored
- Jun 15, 2023
  Timothy Carambat authored
  1. Define LLM temperature as a workspace setting
  2. Implement rudimentary table migration code for both new and existing repos to bring tables up to date
  3. Add a trigger on workspace update to refresh the timestamp
  4. Always fall back temperature to 0.7
  5. Extract WorkspaceModal into tabbed content
  6. Remove the workspace name UNIQUE constraint (cannot be migrated)
  7. Add slug + seed when an existing slug is already taken
  8. Separate name from slug so display names can be changed
  * Remove blocking test return
- Jun 09, 2023
  timothycarambat authored
- Jun 08, 2023
  Timothy Carambat authored
- Jun 04, 2023
  timothycarambat authored