This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
  1. Dec 05, 2024
  2. Nov 13, 2024
  3. Oct 18, 2024
  4. Aug 15, 2024
    • Agent Context window + context window refactor. (#2126) · 99f2c25b
      Timothy Carambat authored
      * Enable agent context windows to be accurate per provider:model
      
      * Refactor model mapping to external file
      Add token count to document length instead of char-count
      reference promptWindowLimit from AIProvider in central location
      
      * remove unused imports
      99f2c25b
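
The token-count change described in 99f2c25b above measures documents in tokens rather than characters before comparing them against a provider's context window. A minimal sketch of that idea, assuming a hypothetical `countTokens` heuristic and a hard-coded limits table; the real project centralizes this lookup as `promptWindowLimit` on the AI provider, and the values below are illustrative only:

```typescript
// Sketch only: countTokens and the limits table are illustrative stand-ins,
// not the project's actual implementation.

// Rough token estimate; a real implementation would use a tokenizer library.
function countTokens(text: string): number {
  return Math.ceil(text.length / 4); // ~4 characters per token heuristic
}

// Hypothetical per-model context window limits, keyed as "provider:model".
const PROMPT_WINDOW_LIMITS: Record<string, number> = {
  "openai:gpt-4-turbo": 128_000,
  "anthropic:claude-3-haiku": 200_000,
};

function promptWindowLimit(provider: string, model: string): number {
  return PROMPT_WINDOW_LIMITS[`${provider}:${model}`] ?? 4_096;
}

// Decide whether a document fits the model's window by tokens, not characters.
function fitsInWindow(docText: string, provider: string, model: string): boolean {
  return countTokens(docText) <= promptWindowLimit(provider, model);
}

console.log(fitsInWindow("hello world".repeat(100), "openai", "gpt-4-turbo")); // true
```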
  5. Jul 31, 2024
    • Add multimodality support (#2001) · 38fc1812
      Timothy Carambat authored
      * Add multimodality support
      
      * Add Bedrock, KoboldCpp, LocalAI, and TextWebGenUI multi-modal
      
      * temp dev build
      
      * patch bad import
      
      * noscrolls for windows dnd
      
      * noscrolls for windows dnd
      
      * update README
      
      * update README
      
      * add multimodal check
      38fc1812
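
Multi-modal support generally means a chat message can carry image data alongside text. A hedged sketch of the common OpenAI-style message shape (the field names follow the public Chat Completions format; how the app maps attachments onto each listed provider is not shown here, and the file path is a placeholder):

```typescript
import { readFileSync } from "node:fs";

// Build an OpenAI-style multi-modal user message: text plus a base64 image.
// Each provider expects a slightly different payload; this is the common shape.
function buildImageMessage(prompt: string, imagePath: string) {
  const base64 = readFileSync(imagePath).toString("base64");
  return {
    role: "user" as const,
    content: [
      { type: "text", text: prompt },
      {
        type: "image_url",
        image_url: { url: `data:image/png;base64,${base64}` },
      },
    ],
  };
}

// Placeholder path for illustration.
const message = buildImageMessage("What is in this picture?", "./photo.png");
console.log(JSON.stringify(message).slice(0, 200));
```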
  6. Jun 28, 2024
  7. May 17, 2024
  8. May 01, 2024
  9. Apr 30, 2024
  10. Mar 22, 2024
  11. Feb 14, 2024
    • Refactor LLM chat backend (#717) · c59ab9da
      Timothy Carambat authored
      * refactor stream/chat/embed-stream to be a single execution logic path so that it is easier to maintain and build upon
      
      * no thread in sync chat since only api uses it
      adjust import locations
      c59ab9da
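
The refactor in c59ab9da collapses separate streaming and synchronous chat paths into one. A minimal sketch of that pattern, with all names hypothetical: the prompt is assembled once, and only the final send step branches on whether the caller wants a stream.

```typescript
// Hypothetical provider interface: one sync and one streaming completion call.
interface LLMProvider {
  getChatCompletion(messages: string[]): Promise<string>;
  streamGetChatCompletion(messages: string[]): AsyncIterable<string>;
}

// Single execution path: history and prompt handling live in one place; only
// the last step differs between streamed and synchronous responses.
async function chatWithWorkspace(
  provider: LLMProvider,
  userMessage: string,
  history: string[],
  onToken?: (token: string) => void, // present => stream the reply
): Promise<string> {
  const messages = [...history, userMessage]; // shared prompt construction

  if (onToken) {
    let full = "";
    for await (const token of provider.streamGetChatCompletion(messages)) {
      onToken(token);
      full += token;
    }
    return full;
  }
  return provider.getChatCompletion(messages);
}

// Tiny fake provider to demonstrate both paths.
const echoProvider: LLMProvider = {
  async getChatCompletion(msgs) {
    return `echo: ${msgs[msgs.length - 1]}`;
  },
  async *streamGetChatCompletion(msgs) {
    yield "echo: ";
    yield msgs[msgs.length - 1];
  },
};

chatWithWorkspace(echoProvider, "hi", []).then(console.log);      // sync path
chatWithWorkspace(echoProvider, "hi", [], (t) => console.log(t)); // stream path
```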
  12. Feb 07, 2024
  13. Jan 17, 2024
    • add support for mistral api (#610) · c2c8fe97
      Sean Hatfield authored
      
      * add support for mistral api
      
      * update docs to show support for Mistral
      
      * add default temp to all providers, suggest different results per provider
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      c2c8fe97
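
The "default temp to all providers" change in c2c8fe97 means each connector falls back to a sensible temperature when the workspace does not set one, and that default can differ by provider. A small sketch of the fallback; the values below are purely illustrative, not the project's actual defaults:

```typescript
// Illustrative defaults only; the real per-provider values live in each
// provider class and may differ.
const DEFAULT_TEMPERATURE: Record<string, number> = {
  openai: 0.7,
  anthropic: 0.7,
  mistral: 0.0,
};

function resolveTemperature(provider: string, workspaceTemp?: number): number {
  // Prefer the workspace setting; otherwise fall back to the provider default.
  return workspaceTemp ?? DEFAULT_TEMPERATURE[provider] ?? 0.7;
}

console.log(resolveTemperature("mistral"));     // 0.0 (provider default)
console.log(resolveTemperature("openai", 0.2)); // 0.2 (workspace override)
```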
    • Per workspace model selection (#582) · 90df3758
      Sean Hatfield authored
      
      * WIP model selection per workspace (migrations and openai save properly)
      
      * revert OpenAiOption
      
      * add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
      
      * remove unneeded comments
      
      * update logic for when LLMProvider is reset, reset Ai provider files with master
      
      * remove frontend/api reset of workspace chat and move logic to updateENV
      add postUpdate callbacks to envs
      
      * set preferred model for chat on class instantiation
      
      * remove extra param
      
      * linting
      
      * remove unused var
      
      * refactor chat model selection on workspace
      
      * linting
      
      * add fallback for base path to localai models
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      90df3758
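
Per-workspace model selection (90df3758) layers a workspace-level model preference on top of the system-wide provider setting. A sketch of that resolution order, assuming hypothetical field names and illustrative model strings; the real schema, migration, and keys may differ:

```typescript
// Hypothetical shapes; field names and model strings are illustrative.
interface Workspace {
  slug: string;
  chatModel?: string | null; // per-workspace preference added by the migration
}

interface SystemSettings {
  llmProvider: string;  // e.g. "openai", "anthropic", "ollama", "togetherai"
  defaultModel: string; // system-wide model preference
}

// Resolution order: workspace override first, then the system default.
function resolveChatModel(workspace: Workspace, settings: SystemSettings): string {
  return workspace.chatModel ?? settings.defaultModel;
}

const settings: SystemSettings = { llmProvider: "openai", defaultModel: "gpt-3.5-turbo" };
console.log(resolveChatModel({ slug: "docs" }, settings));                        // "gpt-3.5-turbo"
console.log(resolveChatModel({ slug: "support", chatModel: "gpt-4" }, settings)); // "gpt-4"
```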
  14. Dec 28, 2023
    • Llm chore cleanup (#501) · 6d5968bf
      Timothy Carambat authored
      * move internal functions to private in class
      simplify lc message convertor
      
      * Fix hanging Context text when none is present
      6d5968bf
  15. Nov 13, 2023
  16. Nov 09, 2023
    • Using OpenAI API locally (#335) · f499f1ba
      Francisco Bischoff authored
      
      * Using OpenAI API locally
      
      * Infinite prompt input and compression implementation (#332)
      
      * WIP on continuous prompt window summary
      
      * wip
      
      * Move chat out of VDB
      simplify chat interface
      normalize LLM model interface
      have compression abstraction
      Cleanup compressor
      TODO: Anthropic stuff
      
      * Implement compression for Anthropic
      Fix lancedb sources
      
      * cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
      
      * Resolve Weaviate citation sources not working with schema
      
      * comment cleanup
      
      * disable import on hosted instances (#339)
      
      * disable import on hosted instances
      
      * Update UI on disabled import/export
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      
      * Add support for gpt-4-turbo 128K model (#340)
      
      resolves #336
      Add support for gpt-4-turbo 128K model
      
      * 315 show citations based on relevancy score (#316)
      
      * settings for similarity score threshold and prisma schema updated
      
      * prisma schema migration for adding similarityScore setting
      
      * WIP
      
      * Min score default change
      
      * added similarityThreshold checking for all vectordb providers
      
      * linting
      
      ---------
      
      Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
      
      * rename localai to lmstudio
      
      * forgot files that were renamed
      
      * normalize model interface
      
      * add model and context window limits
      
      * update LMStudio tagline
      
      * Fully working LMStudio integration
      
      ---------
      Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
      Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
      Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
      f499f1ba
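
One idea in f499f1ba lends itself to a short sketch: retrieved sources are only kept and cited when their similarity score clears a configurable threshold. A minimal, hedged illustration; the names and the 0.25 default are assumptions, not the project's exact values:

```typescript
// A retrieved chunk from any vector DB (lance, chroma, pinecone, weaviate, ...).
interface SourceChunk {
  text: string;
  metadata: Record<string, unknown>;
  score: number; // similarity to the query, normalized to 0..1
}

// Keep only sources that clear the similarity threshold, so low-relevance
// chunks neither pad the prompt nor show up as citations.
function filterByRelevancy(
  sources: SourceChunk[],
  similarityThreshold = 0.25, // assumed default; configurable per workspace
): SourceChunk[] {
  return sources
    .filter((s) => s.score >= similarityThreshold)
    .sort((a, b) => b.score - a.score);
}

const hits: SourceChunk[] = [
  { text: "relevant passage", metadata: {}, score: 0.82 },
  { text: "barely related", metadata: {}, score: 0.11 },
];
console.log(filterByRelevancy(hits).length); // 1
```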