This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
Pull mirroring updated.
- May 22, 2024
  timothycarambat authored
- May 20, 2024
  Timothy Carambat authored
  * Allow setting of safety thresholds for Gemini
  * linting
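The safety-threshold commit above maps onto the `safetySettings` option that the `@google/generative-ai` SDK accepts. A minimal sketch of the idea, assuming one user-chosen level is fanned out across all harm categories (the function name and level names are illustrative, not AnythingLLM's actual code; the category/threshold string constants are the SDK's documented values):

```javascript
// Harm categories recognized by the Gemini API.
const HARM_CATEGORIES = [
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_DANGEROUS_CONTENT",
];

// Hypothetical user-facing levels mapped to Gemini block thresholds.
const LEVEL_TO_THRESHOLD = {
  none: "BLOCK_NONE",
  high_only: "BLOCK_ONLY_HIGH",
  medium_and_above: "BLOCK_MEDIUM_AND_ABOVE",
  low_and_above: "BLOCK_LOW_AND_ABOVE",
};

function buildSafetySettings(level = "medium_and_above") {
  const threshold = LEVEL_TO_THRESHOLD[level] ?? "BLOCK_MEDIUM_AND_ABOVE";
  // One entry per harm category, all sharing the same threshold.
  return HARM_CATEGORIES.map((category) => ({ category, threshold }));
}
```

The resulting array would be passed to `getGenerativeModel({ model, safetySettings })`.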
- May 18, 2024
  Timothy Carambat authored
- May 17, 2024
  Timothy Carambat authored
- May 16, 2024
  Sean Hatfield authored
  * litellm LLM provider support
  * fix lint error
  * change import orders; fix issue with model retrieval
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
- May 13, 2024
  Timothy Carambat authored
  Sean Hatfield authored
  * add api key support for oobabooga web ui
  * don't expose API Key for TextWebGenUi
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- May 11, 2024
  Sean Hatfield authored
  validate messages schema for gemini provider
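Gemini is stricter than OpenAI-style APIs about chat history shape: turns must alternate roles and the conversation must open with a user turn. A minimal sketch of that kind of schema validation, assuming a simple `{ role }` message shape (the function name and return convention are illustrative, not the provider's actual code):

```javascript
// Returns true only when the history satisfies Gemini's turn constraints:
// first message from the user, and no two consecutive messages with the
// same role.
function validateGeminiMessages(messages) {
  if (!messages.length || messages[0].role !== "user") return false;
  for (let i = 1; i < messages.length; i++) {
    if (messages[i].role === messages[i - 1].role) return false;
  }
  return true;
}
```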
- May 10, 2024
  Sean Hatfield authored
  * add max tokens field to generic openai llm connector
  * add max_tokens property to generic openai agent provider
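For a generic OpenAI-compatible connector, a user-supplied token cap translates into the standard `max_tokens` request parameter, attached only when configured so the backend's default applies otherwise. A hedged sketch (function and option names are illustrative; `max_tokens` itself is the OpenAI-style API field):

```javascript
// Build an OpenAI-compatible chat completion request body, including
// max_tokens only when the user set a positive numeric cap.
function buildChatRequest(model, messages, { maxTokens = null, temperature = 0.7 } = {}) {
  const body = { model, messages, temperature };
  if (Number.isFinite(maxTokens) && maxTokens > 0) {
    body.max_tokens = Math.floor(maxTokens);
  }
  return body;
}
```

Leaving the field off entirely, rather than sending `null`, matters because some OpenAI-compatible servers reject non-numeric values.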
- May 08, 2024
  Sean Hatfield authored
  * add text gen web ui LLM provider support
  * update README
  * README typo
  * update TextWebUI display name; patch workspace<>model support for provider
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- May 02, 2024
  Sean Hatfield authored
  * koboldcpp LLM support
  * update .env.examples for koboldcpp support
  * update LLM preference order; update koboldcpp comments
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
  Sean Hatfield authored
  * getChatCompletion working; WIP streaming
  * WIP
  * working streaming; WIP abort stream
  * implement cohere embedder support
  * remove inputType option from cohere embedder
  * fix cohere LLM from not aborting stream when canceled by user
  * Patch Cohere implementation
  * add cohere to onboarding
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- May 01, 2024
  Sean Hatfield authored
  * remove sendChat and streamChat functions/references in all LLM providers
  * remove unused imports
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Apr 30, 2024
  Timothy Carambat authored
  * Bump `openai` package to latest; tested all except localai
  * bump LocalAI support with latest image
  * add deprecation notice
  * linting
  Timothy Carambat authored
  * bump langchain deps
  * patch native and ollama providers; remove deprecated deps
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Apr 26, 2024
  Timothy Carambat authored
  * Strengthen field validations on user updates
  * update writables
  timothycarambat authored
- Apr 25, 2024
  timothycarambat authored
  resolves #1188
- Apr 23, 2024
  Timothy Carambat authored
  * Add generic OpenAI endpoint support
  * allow any input for model in case provider does not support models endpoint
  Timothy Carambat authored
  * patch agent invocation rule
  * Add dynamic model cache from OpenRouter API for context length and available models
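A dynamic model cache like the one in the OpenRouter commit above typically boils down to a fetch-on-miss store with a time-to-live, so the model list and context lengths refresh periodically without hitting the API on every request. A sketch of the pattern, with the fetcher injected so the cache itself needs no network (the class name, TTL default, and record shape are illustrative):

```javascript
// TTL cache for remotely fetched model metadata (e.g. OpenRouter's
// /api/v1/models response with ids and context lengths).
class ModelCache {
  constructor(fetchModels, ttlMs = 60 * 60 * 1000) {
    this.fetchModels = fetchModels; // async () => [{ id, context_length }, ...]
    this.ttlMs = ttlMs;
    this.models = null;
    this.fetchedAt = 0;
  }

  async get(now = Date.now()) {
    // Refetch only when empty or the cached copy has expired.
    if (!this.models || now - this.fetchedAt > this.ttlMs) {
      this.models = await this.fetchModels();
      this.fetchedAt = now;
    }
    return this.models;
  }
}
```

Passing `now` explicitly keeps the expiry logic deterministic and easy to test.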
- Apr 22, 2024
  Sean Hatfield authored
  add support for more groq models
- Apr 19, 2024
  Timothy Carambat authored
  * Add support for Gemini-1.5 Pro; bump @google/generative-ai pkg; toggle apiVersion if beta model selected (resolves #1109)
  * update API messages due to package change
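The apiVersion toggle mentioned above is a small model-name check: models only exposed on Gemini's beta surface need requests routed to `v1beta` instead of `v1`. A minimal sketch under that assumption (the function name and the beta model list are illustrative and dated; the actual list changes as Google promotes models to stable):

```javascript
// Pick the Gemini REST API version based on the selected model.
// Beta-only models (gemini-1.5-pro at the time of this commit) must
// use the v1beta endpoint.
function geminiApiVersion(model) {
  const betaModels = ["gemini-1.5-pro-latest"];
  return betaModels.includes(model) ? "v1beta" : "v1";
}
```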
- Apr 18, 2024
  timothycarambat authored
  resolves #1126
- Apr 16, 2024
  Timothy Carambat authored
  Timothy Carambat authored
  * Enable dynamic GPT model dropdown
- Apr 14, 2024
  Timothy Carambat authored
  Timothy Carambat authored
  resolves #1096
- Apr 02, 2024
  Timothy Carambat authored
- Mar 27, 2024
  Timothy Carambat authored
- Mar 22, 2024
  Timothy Carambat authored
- Mar 14, 2024
  Sean Hatfield authored
  add Haiku model support
- Mar 12, 2024
  Timothy Carambat authored
  * Stop generation button during stream-response
  * add custom stop icon
  * add stop to thread chats
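A stop-generation button during a streamed response is usually wired with an `AbortController`: the stream consumer listens on the controller's signal, and the button simply calls `abort()`. A hedged sketch of the pattern (names are illustrative, not AnythingLLM's actual wiring; `AbortController` is the standard browser/Node API):

```javascript
// Wrap a streaming function so the caller gets both the completion promise
// and a stop() handle that aborts the in-flight stream.
function makeStoppableStream(streamFn) {
  const controller = new AbortController();
  // streamFn is expected to read chunks until done or until the signal fires.
  const done = streamFn(controller.signal);
  return { done, stop: () => controller.abort() };
}
```

In a UI, `stop` would be bound to the stop button's click handler while `done` resolves when the stream finishes or is cancelled.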
- Mar 06, 2024
  Sean Hatfield authored
  * implement new version of anthropic sdk and support new models
  * remove handleAnthropicStream and move to handleStream inside anthropic provider
  * update useGetProvidersModels for new anthropic models
  Sean Hatfield authored
  * Groq LLM support complete
  * update useGetProvidersModels for groq models
  * Add definitions; update comments and error log reports; add example envs
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Feb 24, 2024
  Timothy Carambat authored
  bump pplx model support
  Sean Hatfield authored
  * WIP openrouter integration
  * add OpenRouter options to onboarding flow and data handling
  * add todo to fix headers for rankings
  * OpenRouter LLM support complete
  * Fix hanging response stream with OpenRouter; update tagline; update comment
  * update timeout comment
  * wait for first chunk to start timer
  * sort OpenRouter models by organization
  * uppercase first letter of organization
  * sort grouped models by org
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Feb 22, 2024
  Sean Hatfield authored
  * add LLM support for perplexity
  * update README & example env
  * fix ENV keys in example env files
  * slight changes for QA of perplexity support
  * Update Perplexity AI name
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
  Sean Hatfield authored
  [DOCS] Update Docker documentation to show how to set up Ollama with Dockerized version of AnythingLLM (#774)
  * update HOW_TO_USE_DOCKER to help with Ollama setup using docker
  * update HOW_TO_USE_DOCKER
  * styles update
  * create separate README for ollama and link to it in HOW_TO_USE_DOCKER
  * styling update
- Feb 21, 2024
  Timothy Carambat authored
  * Enable ability to do full-text query on documents
  * Show alert modal on first pin for client
  * Add ability to use pins in stream/chat/embed
  * typo and copy update
  * simplify spread of context and sources
- Feb 14, 2024
  Timothy Carambat authored
  * refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon
  * no thread in sync chat since only the API uses it; adjust import locations