This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
Pull mirroring updated.
- Feb 03, 2025
  timothycarambat authored
- Jan 31, 2025
  Timothy Carambat authored:
  * Add tokenizer improvements via Singleton class
  * linting
  * dev build
  * Estimation fallback when string exceeds a fixed byte size
  * Add notice to tiktoken on backend
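The tokenizer change above pairs a shared (singleton) tokenizer instance with a cheap estimation fallback for oversized inputs. A minimal sketch of that pattern, assuming illustrative names (`TokenManager`, `MAX_BYTES`, `AVG_BYTES_PER_TOKEN`) and a stand-in for the real tiktoken call:

```javascript
// Hypothetical sketch, not the project's actual implementation.
class TokenManager {
  static #instance = null;
  static MAX_BYTES = 500_000;     // assumed cutoff before falling back
  static AVG_BYTES_PER_TOKEN = 4; // rough heuristic for English text

  // Singleton accessor: every caller shares one tokenizer instance.
  static instance() {
    if (!TokenManager.#instance) TokenManager.#instance = new TokenManager();
    return TokenManager.#instance;
  }

  countTokens(text = "") {
    const bytes = Buffer.byteLength(text, "utf8");
    // For very large strings, skip exact tokenization and estimate instead.
    if (bytes > TokenManager.MAX_BYTES)
      return Math.ceil(bytes / TokenManager.AVG_BYTES_PER_TOKEN);
    return this.#exactCount(text);
  }

  #exactCount(text) {
    // Stand-in for a real tokenizer (tiktoken) call.
    return text.split(/\s+/).filter(Boolean).length;
  }
}
```

The singleton avoids re-initializing the tokenizer per request, and the byte-size guard keeps pathological inputs from stalling exact tokenization.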
- Jan 30, 2025
  Timothy Carambat authored
  Timothy Carambat authored
- Jan 28, 2025
  timothycarambat authored
- Jan 27, 2025
  Sean Hatfield authored:
  * remove native llm
  * remove node-llama-cpp from dockerfile
  * remove unneeded items from dockerfile
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
  Jason authored
- Jan 24, 2025
  Timothy Carambat authored
  Sean Hatfield authored:
  * implement dynamic fetching of togetherai models
  * implement caching for togetherai models
  * update gitignore for togetherai model caching
  * Remove models.json from git tracking
  * Remove .cached_at from git tracking
  * lint
  * revert unneeded change
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
  timothycarambat authored
  Sean Hatfield authored:
  * bump perplexity models
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
- Jan 16, 2025
  Timothy Carambat authored:
  * Support historical message image inputs/attachments for n+1 queries
  * patch gemini
  * OpenRouter vision support cleanup
  * xai vision history support
  * Mistral logging
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Jan 13, 2025
  Timothy Carambat authored:
  * rename file typo
- Dec 29, 2024
  timothycarambat authored
- Dec 18, 2024
  Timothy Carambat authored
- Dec 17, 2024
  Timothy Carambat authored:
  * Add support for gemini authenticated models endpoint: add customModels entry, add un-authed fallback to default listing, separate models by experimental status (resolves #2866)
  * add back improved logic for apiVersion decision making
- Dec 16, 2024
  Timothy Carambat authored:
  * WIP performance metric tracking
  * fix: patch UI trying to .toFixed() null metric; Anthropic tracking migration; cleanup logs
  * Apipie implementation, not tested
  * Cleanup Anthropic notes, Add support for AzureOpenAI tracking
  * bedrock token metric tracking
  * Cohere support
  * feat: improve default stream handler to track for providers that are actually OpenAI compliant in usage reporting; add deepseek support
  * feat: Add FireworksAI tracking reporting; fix: improve handler when usage: null is reported (why?)
  * Add token reporting for GenericOpenAI
  * token reporting for koboldcpp + lmstudio
  * lint
  * support Groq token tracking
  * HF token tracking
  * token tracking for togetherai
  * LiteLLM token tracking
  * linting + Mistral token tracking support
  * XAI token metric reporting
  * native provider runner
  * LocalAI token tracking
  * Novita token tracking
  * OpenRouter token tracking
  * Apipie stream metrics
  * textwebgenui token tracking
  * perplexity token reporting
  * ollama token reporting
  * lint
  * put back comment
  * Rip out LC ollama wrapper and use official library
  * patch images with new ollama lib
  * improve ollama offline message
  * fix image handling in ollama llm provider
  * lint
  * NVIDIA NIM token tracking
  * update openai compatibility responses
  * UI/UX show/hide metrics on click for user preference
  * update bedrock client
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
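The metric-tracking commit above leans on a default stream handler for providers that follow the OpenAI streaming schema: token counts arrive in a `usage` object, but some providers emit `usage: null`, so the handler must fall back to an estimate. A hedged sketch of that fallback, with all function names and the ~4-chars-per-token heuristic being illustrative assumptions:

```javascript
// Hypothetical sketch of usage extraction from an OpenAI-style chunk stream.
function collectStreamMetrics(chunks) {
  let text = "";
  let usage = null;
  for (const chunk of chunks) {
    // Accumulate streamed text for the estimation fallback.
    text += chunk?.choices?.[0]?.delta?.content ?? "";
    // Some providers report usage only on the final chunk; last non-null wins.
    if (chunk?.usage) usage = chunk.usage;
  }
  if (usage?.completion_tokens != null) return usage;
  // Fallback when usage is missing or null: rough ~4 chars/token estimate.
  return { completion_tokens: Math.ceil(text.length / 4), estimated: true };
}
```

Flagging the fallback result with `estimated: true` lets a UI distinguish provider-reported counts from heuristic ones.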
  wolfganghuse authored:
  * added attachments to genericopenai prompt
  * add devnote
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Dec 13, 2024
  Sean Hatfield authored:
  * fix apipie streaming/sort by chat models
  * lint
  * linting
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Dec 12, 2024
  timothycarambat authored
- Dec 11, 2024
  timothycarambat authored:
  connect #2788
  Timothy Carambat authored
- Dec 05, 2024
  timothycarambat authored
  Timothy Carambat authored:
  * Add Support for NVIDIA NIM
  * update README
  * linting
- Nov 22, 2024
  timothycarambat authored
- Nov 21, 2024
  timothycarambat authored:
  resolves #2657
  Sean Hatfield authored:
  * togetherai llama 3.2 vision models support
  * remove console log
  * fix listing to reflect what is on the chart
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Nov 20, 2024
  Timothy Carambat authored
  timothycarambat authored
- Nov 18, 2024
  Sean Hatfield authored:
  * bump together ai models
  * Run post-bump command
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Nov 13, 2024
  Sean Hatfield authored:
  * patch bad models endpoint path in lm studio embedding engine
  * convert to OpenAI wrapper compatibility
  * add URL force parser/validation for LMStudio connections
  * remove comment
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Nov 06, 2024
  timothycarambat authored
- Nov 04, 2024
  Timothy Carambat authored:
  * feat: add new model provider: Novita AI
  * feat: finished Novita AI
  * fix: code lint
  * remove unneeded logging
  * add back log for novita stream not self closing
  * Clarify ENV vars for LLM/embedder separation for future; patch ENV check for workspace/agent provider
  Co-authored-by: Jason <ggbbddjm@gmail.com>
  Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Oct 29, 2024
  Timothy Carambat authored
- Oct 21, 2024
  Timothy Carambat authored:
  * Add Grok/XAI support for LLM & agents
  * forgot files
  Timothy Carambat authored:
  Adds support for only the llama3.2 vision models on Groq. This comes with many conditionals and nuances to handle, as Groq's vision implementation is quite bad right now.
- Oct 18, 2024
  Timothy Carambat authored
- Oct 16, 2024
  Timothy Carambat authored
  Sean Hatfield authored:
  * support openai o1 models
  * Prevent O1 use for agents; getter for isO1Model
  Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Oct 15, 2024
  Sean Hatfield authored:
  * support generic openai workspace model
  * Update UI for free form input for some providers
  Co-authored-by: Timothy Carambat <rambat1010@gmail.com>