This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
  1. Jan 27, 2025
  2. Jan 16, 2025
  3. Dec 16, 2024
    •
      LLM performance metric tracking (#2825) · dd7c4675
      Timothy Carambat authored
      
      * WIP performance metric tracking
      
      * fix: patch UI trying to .toFixed() null metric
      Anthropic tracking migration
      cleanup logs
      
      * Apipie implementation, not tested
      
      * Cleanup Anthropic notes, Add support for AzureOpenAI tracking
      
      * bedrock token metric tracking
      
      * Cohere support
      
      * feat: improve default stream handler to track usage for providers that are actually OpenAI compliant in usage reporting
      add deepseek support
      
      * feat: Add FireworksAI tracking reporting
      fix: improve handler when usage:null is reported (why?)
      
      * Add token reporting for GenericOpenAI
      
      * token reporting for koboldcpp + lmstudio
      
      * lint
      
      * support Groq token tracking
      
      * HF token tracking
      
      * token tracking for togetherai
      
      * LiteLLM token tracking
      
      * linting + Mistral token tracking support
      
      * XAI token metric reporting
      
      * native provider runner
      
      * LocalAI token tracking
      
      * Novita token tracking
      
      * OpenRouter token tracking
      
      * Apipie stream metrics
      
      * textwebgenui token tracking
      
      * perplexity token reporting
      
      * ollama token reporting
      
      * lint
      
      * put back comment
      
      * Rip out LC ollama wrapper and use official library
      
      * patch images with new ollama lib
      
      * improve ollama offline message
      
      * fix image handling in ollama llm provider
      
      * lint
      
      * NVIDIA NIM token tracking
      
      * update openai compatibility responses
      
      * UI/UX show/hide metrics on click for user preference
      
      * update bedrock client
      
      ---------
      
      Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
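The usage-reporting handler this commit describes can be sketched roughly as below. This is a hedged illustration, not AnythingLLM's actual code: the function name `collectStreamMetrics` and the chunk shapes are assumptions, modeled on the OpenAI-compatible `usage` object that many of the listed providers emit (and which some report as `usage: null` on intermediate chunks).

```javascript
// Illustrative sketch: accumulate token usage metrics from OpenAI-style stream
// chunks. Some providers send usage only on the final chunk; others emit
// usage: null mid-stream, so null/undefined usage is skipped defensively.
function collectStreamMetrics(chunks) {
  const metrics = { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 };
  for (const chunk of chunks) {
    if (!chunk.usage) continue; // guards the reported usage:null case
    metrics.prompt_tokens = chunk.usage.prompt_tokens ?? metrics.prompt_tokens;
    metrics.completion_tokens =
      chunk.usage.completion_tokens ?? metrics.completion_tokens;
    // Fall back to summing when a provider omits total_tokens.
    metrics.total_tokens =
      chunk.usage.total_tokens ??
      metrics.prompt_tokens + metrics.completion_tokens;
  }
  return metrics;
}
```

Guarding on a falsy `usage` rather than checking each field individually is what lets one default handler cover every provider that is OpenAI compliant in its usage reporting.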
  4. Nov 04, 2024
  5. Aug 15, 2024
  6. Aug 02, 2024
  7. Jul 31, 2024
    •
      Add multimodality support (#2001) · 38fc1812
      Timothy Carambat authored
      * Add multimodality support
      
      * Add Bedrock, KoboldCpp, LocalAI, and TextWebGenUI multi-modal
      
      * temp dev build
      
      * patch bad import
      
      * noscrolls for windows dnd
      
      * noscrolls for windows dnd
      
      * update README
      
      * update README
      
      * add multimodal check
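For OpenAI-compatible providers, multimodality support of the kind this commit adds typically means attaching images to a chat message as `image_url` content parts. The sketch below shows that common message shape only; `buildMultiModalMessage` and its parameters are hypothetical names, not AnythingLLM's API.

```javascript
// Illustrative sketch: shape a user message for an OpenAI-compatible
// multimodal endpoint. Plain text stays a string; with images attached,
// content becomes an array of typed parts.
function buildMultiModalMessage(text, base64Images = []) {
  if (!base64Images.length) return { role: "user", content: text };
  return {
    role: "user",
    content: [
      { type: "text", text },
      ...base64Images.map((b64) => ({
        type: "image_url",
        image_url: { url: `data:image/png;base64,${b64}` },
      })),
    ],
  };
}
```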
  8. Jul 29, 2024
  9. Jul 22, 2024
  10. Jun 28, 2024
  11. May 22, 2024
  12. May 17, 2024
  13. May 01, 2024
  14. Apr 30, 2024
  15. Apr 26, 2024
  16. Apr 23, 2024
  17. Mar 12, 2024
  18. Feb 24, 2024
    •
      [FEAT] OpenRouter integration (#784) · 633f4252
      Sean Hatfield authored
      
      * WIP openrouter integration
      
      * add OpenRouter options to onboarding flow and data handling
      
      * add todo to fix headers for rankings
      
      * OpenRouter LLM support complete
      
      * Fix hanging response stream with OpenRouter
      update tagline
      update comment
      
      * update timeout comment
      
      * wait for first chunk to start timer
      
      * sort OpenRouter models by organization
      
      * uppercase first letter of organization
      
      * sort grouped models by org
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
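The "Fix hanging response stream" and "wait for first chunk to start timer" entries above suggest a stall watchdog that is only armed once data begins flowing and is reset on every chunk. A minimal sketch under those assumptions (the name `makeStreamWatchdog` and the 500 ms window are illustrative, not taken from the repository):

```javascript
// Hedged sketch of a hanging-stream guard: the stall timer starts only after
// the first chunk arrives and is reset on each subsequent chunk, so a slow
// provider warming up never trips it before any data has been sent.
function makeStreamWatchdog(onStall, windowMs = 500) {
  let timer = null;
  return {
    chunkReceived() {
      if (timer) clearTimeout(timer); // reset the window on every chunk
      timer = setTimeout(onStall, windowMs); // armed only once data flows
    },
    close() {
      if (timer) clearTimeout(timer); // stream ended cleanly; stand down
      timer = null;
    },
  };
}
```

Starting the timer from the first chunk rather than from the request avoids falsely flagging providers with long time-to-first-token as hung.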
  19. Feb 14, 2024
  20. Feb 07, 2024
  21. Jan 17, 2024
    •
      add support for mistral api (#610) · c2c8fe97
      Sean Hatfield authored
      
      * add support for mistral api
      
      * update docs to show support for Mistral
      
      * add default temp to all providers, suggest different results per provider
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
    •
      Per workspace model selection (#582) · 90df3758
      Sean Hatfield authored
      
      * WIP model selection per workspace (migrations and openai saves properly)
      
      * revert OpenAiOption
      
      * add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
      
      * remove unneeded comments
      
      * update logic for when LLMProvider is reset, reset Ai provider files with master
      
      * remove frontend/api reset of workspace chat and move logic to updateENV
      add postUpdate callbacks to envs
      
      * set preferred model for chat on class instantiation
      
      * remove extra param
      
      * linting
      
      * remove unused var
      
      * refactor chat model selection on workspace
      
      * linting
      
      * add fallback for base path to localai models
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
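The "set preferred model for chat on class instantiation" entry implies a simple resolution order: the workspace's saved model wins, falling back to the provider-wide default. A minimal sketch, assuming a `chatModel` field on the workspace record (the field and function names are guesses for illustration):

```javascript
// Illustrative only: resolve which model a chat should use. The per-workspace
// selection takes precedence; a missing or unset workspace value falls back
// to the provider's system-wide default model.
function resolveChatModel(workspace, providerDefault) {
  return workspace?.chatModel || providerDefault;
}
```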
  22. Jan 10, 2024
  23. Dec 28, 2023
  24. Dec 04, 2023
  25. Nov 14, 2023
  26. Nov 13, 2023
  27. Nov 09, 2023
    •
      Using OpenAI API locally (#335) · f499f1ba
      Francisco Bischoff authored
      
      * Using OpenAI API locally
      
      * Infinite prompt input and compression implementation (#332)
      
      * WIP on continuous prompt window summary
      
      * wip
      
      * Move chat out of VDB
      simplify chat interface
      normalize LLM model interface
      have compression abstraction
      Cleanup compressor
      TODO: Anthropic stuff
      
      * Implement compression for Anthropic
      Fix lancedb sources
      
      * cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
      
      * Resolve Weaviate citation sources not working with schema
      
      * comment cleanup
      
      * disable import on hosted instances (#339)
      
      * disable import on hosted instances
      
      * Update UI on disabled import/export
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      
      * Add support for gpt-4-turbo 128K model (#340)
      
      resolves #336
      Add support for gpt-4-turbo 128K model
      
      * 315 show citations based on relevancy score (#316)
      
      * settings for similarity score threshold and prisma schema updated
      
      * prisma schema migration for adding similarityScore setting
      
      * WIP
      
      * Min score default change
      
      * added similarityThreshold checking for all vectordb providers
      
      * linting
      
      ---------
      
      Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
      
      * rename localai to lmstudio
      
      * forgot files that were renamed
      
      * normalize model interface
      
      * add model and context window limits
      
      * update LMStudio tagline
      
      * Fully working LMStudio integration
      
      ---------
      Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
      Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
      Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
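The "315 show citations based on relevancy score" work folded into this commit adds a `similarityScore` setting and threshold checks across the vector DB providers. The filter it describes amounts to keeping only sources at or above the workspace's threshold; the sketch below is an assumption-laden illustration (field names `score` and `similarityThreshold` are guesses), not the repository's code.

```javascript
// Hypothetical sketch of a relevancy-score citation filter: only retrieved
// sources whose similarity score meets the configured threshold are shown
// as citations. Sources lacking a score are treated as 0 and dropped.
function filterByRelevancy(sources, similarityThreshold = 0.25) {
  return sources.filter((s) => (s.score ?? 0) >= similarityThreshold);
}
```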