This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
  1. Apr 23, 2024
  2. Mar 12, 2024
  3. Feb 24, 2024
    • [FEAT] OpenRouter integration (#784) · 633f4252
      Sean Hatfield authored
      
      * WIP openrouter integration
      
      * add OpenRouter options to onboarding flow and data handling
      
      * add todo to fix headers for rankings
      
      * OpenRouter LLM support complete
      
      * Fix hanging response stream with OpenRouter
      update tagline
      update comment
      
      * update timeout comment
      
      * wait for first chunk to start timer
      
      * sort OpenRouter models by organization
      
      * uppercase first letter of organization
      
      * sort grouped models by org
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
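
  A minimal sketch of the hanging-stream fix above, assuming the response is consumed as an async iterator of text chunks; the function and option names are illustrative, not the project's actual code. The watchdog is only armed once the first chunk has arrived, so a slow time-to-first-token is not mistaken for a stalled stream.

      function nextWithTimeout(iterator, timeoutMs) {
        return new Promise((resolve, reject) => {
          const timer = setTimeout(
            () => reject(new Error("Stream stalled between chunks")),
            timeoutMs
          );
          iterator
            .next()
            .then((result) => { clearTimeout(timer); resolve(result); })
            .catch((err) => { clearTimeout(timer); reject(err); });
        });
      }

      async function readStreamWithWatchdog(stream, { timeoutMs = 20000 } = {}) {
        const iterator = stream[Symbol.asyncIterator]();
        const chunks = [];
        let sawFirstChunk = false;

        while (true) {
          const result = sawFirstChunk
            ? await nextWithTimeout(iterator, timeoutMs) // per-chunk deadline
            : await iterator.next(); // no deadline before the first chunk
          if (result.done) break;
          chunks.push(String(result.value));
          sawFirstChunk = true;
        }
        return chunks.join("");
      }
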
  4. Feb 14, 2024
    • Refactor LLM chat backend (#717) · c59ab9da
      Timothy Carambat authored
      * refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon
      
      * no thread in sync chat since only api uses it
      adjust import locations
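
  A rough sketch of what a single execution path for streamed and synchronous chat can look like, per the refactor above; the workspace fields and provider method names are assumptions for illustration, not taken from the commit.

      // Shared steps (history, vector search, prompt assembly) run once here
      // instead of being duplicated across separate chat and stream handlers.
      async function chatWithWorkspace({ workspace, message, llm, stream = false }) {
        const messages = [
          { role: "system", content: workspace.systemPrompt ?? "You are a helpful assistant." },
          { role: "user", content: message },
        ];
        const options = { temperature: workspace.temperature ?? 0.7 };

        // The only divergence point: streamed vs. synchronous completion.
        return stream
          ? llm.streamGetChatCompletion(messages, options)
          : llm.getChatCompletion(messages, options);
      }
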
  5. Feb 07, 2024
  6. Jan 17, 2024
    • add support for mistral api (#610) · c2c8fe97
      Sean Hatfield authored
      
      * add support for mistral api
      
      * update docs to show support for Mistral
      
      * add default temp to all providers, suggest different results per provider
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
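
  A hedged sketch of what adding a provider with a default temperature can look like against Mistral's OpenAI-compatible chat endpoint; the class shape, default model, and option names are assumptions for illustration, not the repository's implementation.

      const DEFAULT_TEMP = 0.7; // each provider can carry its own suggested default

      class MistralLLMSketch {
        constructor({ apiKey, model = "mistral-tiny" }) {
          this.apiKey = apiKey;
          this.model = model;
        }

        async getChatCompletion(messages, { temperature = DEFAULT_TEMP } = {}) {
          const res = await fetch("https://api.mistral.ai/v1/chat/completions", {
            method: "POST",
            headers: {
              "Content-Type": "application/json",
              Authorization: `Bearer ${this.apiKey}`,
            },
            body: JSON.stringify({ model: this.model, messages, temperature }),
          });
          if (!res.ok) throw new Error(`Mistral API error: ${res.status}`);
          const data = await res.json();
          return data?.choices?.[0]?.message?.content ?? null;
        }
      }
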
    • Per workspace model selection (#582) · 90df3758
      Sean Hatfield authored
      
      * WIP model selection per workspace (migrations and openai saves properly)
      
      * revert OpenAiOption
      
      * add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
      
      * remove unneeded comments
      
      * update logic for when LLMProvider is reset, reset Ai provider files with master
      
      * remove frontend/api reset of workspace chat and move logic to updateENV
      add postUpdate callbacks to envs
      
      * set preferred model for chat on class instantiation
      
      * remove extra param
      
      * linting
      
      * remove unused var
      
      * refactor chat model selection on workspace
      
      * linting
      
      * add fallback for base path to localai models
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
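
  A small sketch of the fallback order implied by the per-workspace selection above; the field and environment-variable names are assumptions for illustration.

      // A workspace-level model preference, when set, overrides the
      // system-wide default taken from the environment.
      function resolveChatModel(workspace, provider) {
        const systemDefault = {
          openai: process.env.OPEN_MODEL_PREF,
          anthropic: process.env.ANTHROPIC_MODEL_PREF,
          ollama: process.env.OLLAMA_MODEL_PREF,
          togetherai: process.env.TOGETHER_AI_MODEL_PREF,
        }[provider];

        return workspace?.chatModel || systemDefault || null;
      }
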
  7. Jan 10, 2024
    • add Together AI LLM support (#560) · 1d39b8a2
      Sean Hatfield authored
      
      * add Together AI LLM support
      
      * update readme to support together ai
      
      * Patch togetherAI implementation
      
      * add model sorting/option labels by organization for model selection
      
      * linting + add data handling for TogetherAI
      
      * change truthy statement
      patch validLLMSelection method
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
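
  A rough sketch of the organization grouping and sorting behind the model-picker labels above, assuming model ids of the form "organization/model-name"; the data shape is an assumption for illustration.

      function groupModelsByOrganization(models) {
        const groups = {};
        for (const model of models) {
          // e.g. "mistralai/Mixtral-8x7B" becomes organization label "Mistralai"
          const org = model.id.split("/")[0] || "Other";
          const label = org.charAt(0).toUpperCase() + org.slice(1);
          groups[label] = groups[label] || [];
          groups[label].push(model);
        }

        // Sort organizations alphabetically, then each group's models by id.
        return Object.keys(groups)
          .sort((a, b) => a.localeCompare(b))
          .map((organization) => ({
            organization,
            models: groups[organization].sort((a, b) => a.id.localeCompare(b.id)),
          }));
      }
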
  8. Dec 28, 2023
    • Llm chore cleanup (#501) · 6d5968bf
      Timothy Carambat authored
      * move internal functions to private in class
      simplify lc message converter
      
      * Fix hanging Context text when none is present
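
  A small sketch of the empty-context fix above: the context block is only appended when there are context snippets to show, so no empty header dangles in the prompt. The prompt wording is illustrative.

      function buildSystemPrompt(basePrompt, contextTexts = []) {
        if (!contextTexts.length) return basePrompt;

        const context = contextTexts
          .map((text, i) => `[CONTEXT ${i}]:\n${text}`)
          .join("\n\n");
        return `${basePrompt}\n\nContext:\n${context}`;
      }
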
  9. Dec 04, 2023
  10. Nov 14, 2023
  11. Nov 13, 2023
  12. Nov 09, 2023
    • Using OpenAI API locally (#335) · f499f1ba
      Francisco Bischoff authored
      
      * Using OpenAI API locally
      
      * Infinite prompt input and compression implementation (#332)
      
      * WIP on continuous prompt window summary
      
      * wip
      
      * Move chat out of VDB
      simplify chat interface
      normalize LLM model interface
      have compression abstraction
      Cleanup compressor
      TODO: Anthropic stuff
      
      * Implement compression for Anthropic
      Fix lancedb sources
      
      * cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
      
      * Resolve Weaviate citation sources not working with schema
      
      * comment cleanup
      
      * disable import on hosted instances (#339)
      
      * disable import on hosted instances
      
      * Update UI on disabled import/export
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      
      * Add support for gpt-4-turbo 128K model (#340)
      
      resolves #336
      Add support for gpt-4-turbo 128K model
      
      * 315 show citations based on relevancy score (#316)
      
      * settings for similarity score threshold and prisma schema updated
      
      * prisma schema migration for adding similarityScore setting
      
      * WIP
      
      * Min score default change
      
      * added similarityThreshold checking for all vectordb providers
      
      * linting
      
      ---------
      
      Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
      
      * rename localai to lmstudio
      
      * forgot files that were renamed
      
      * normalize model interface
      
      * add model and context window limits
      
      * update LMStudio tagline
      
      * Fully working LMStudio integration
      
      ---------
      Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
      Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
      Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
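
  A sketch of the relevancy filter described in the citation changes above: retrieved chunks are only kept, and only cited, when their similarity score clears the workspace threshold. The result shape and the default threshold are assumptions for illustration.

      function filterByRelevancy(results, similarityThreshold = 0.25) {
        return results
          .filter((r) => typeof r.score === "number" && r.score >= similarityThreshold)
          .sort((a, b) => b.score - a.score);
      }

      // Usage: only the filtered list is passed on as citation sources.
      // const sources = filterByRelevancy(vectorSearchResults, workspace.similarityThreshold);
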