This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
  9. May 08, 2024
    •
      Agent support for LLMs with no function calling (#1295) · 8422f925
      Sean Hatfield authored
      
      * add LMStudio agent support (generic)
      "works" with non-tool-callable LLMs; highly dependent on system specs
      
      * add comments
      
      * enable few-shot prompting per function for OSS models
      
      * Add Agent support for Ollama models
      
      * azure, groq, koboldcpp agent support complete + WIP togetherai
      
      * WIP gemini agent support
      
      * WIP gemini blocked and will not fix for now
      
      * azure fix
      
      * merge fix
      
      * add localai agent support
      
      * azure untooled agent support
      
      * merge fix
      
      * refactor implementation of several agent providers
      
      * update bad merge comment
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
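The "few-shot prompting per function" idea above — teaching models without native tool calling to emit parseable invocations by showing worked examples — can be sketched as follows. This is a hypothetical illustration: the `{ name, description, examples }` shape and `buildFewShotPrompt` are assumptions, not AnythingLLM's actual schema.

```javascript
// Hypothetical sketch: for LLMs without native function calling, prepend
// worked examples of each tool invocation so the model learns to reply
// with a JSON object the agent layer can parse.
function buildFewShotPrompt(fn) {
  const shots = fn.examples
    .map(
      (ex) =>
        `User: ${ex.prompt}\n` +
        `Assistant: ${JSON.stringify({ name: fn.name, arguments: ex.call })}`
    )
    .join("\n\n");
  return (
    `You can use the tool "${fn.name}" (${fn.description}) by replying ` +
    `with a JSON object, as in these examples:\n\n${shots}`
  );
}
```

Because the model only imitates the examples, reliability varies heavily with model quality — consistent with the "highly dependent on system specs" caveat in the commit.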
  16. Feb 24, 2024
    •
      [FEAT] OpenRouter integration (#784) · 633f4252
      Sean Hatfield authored
      
      * WIP openrouter integration
      
      * add OpenRouter options to onboarding flow and data handling
      
      * add todo to fix headers for rankings
      
      * OpenRouter LLM support complete
      
      * Fix hanging response stream with OpenRouter
      update tagline
      update comment
      
      * update timeout comment
      
      * wait for first chunk to start timer
      
      * sort OpenRouter models by organization
      
      * uppercase first letter of organization
      
      * sort grouped models by org
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
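The hanging-stream fix above ("wait for first chunk to start timer") can be sketched as an idle watchdog that is only armed once streaming has begun, so providers that are slow to produce their first token are not cut off. `nextChunk`, `consumeStream`, and the 500 ms window are illustrative assumptions, not AnythingLLM's actual code.

```javascript
// Sketch: wait indefinitely for the FIRST chunk, then treat any gap
// longer than idleMs between subsequent chunks as end-of-stream.
async function consumeStream(nextChunk, { idleMs = 500 } = {}) {
  const chunks = [];
  while (true) {
    let chunk;
    if (chunks.length === 0) {
      // Timer not started yet: the first chunk may take a while.
      chunk = await nextChunk();
    } else {
      // Timer armed: an idle stream resolves to null and ends the loop.
      chunk = await Promise.race([
        nextChunk(),
        new Promise((resolve) => setTimeout(() => resolve(null), idleMs)),
      ]);
    }
    if (chunk === null) break; // end of stream or idle timeout
    chunks.push(chunk);
  }
  return chunks.join("");
}
```

Starting the timer on the first chunk rather than on request dispatch avoids false timeouts during a provider's queueing/warm-up phase while still catching streams that stall mid-response.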
  18. Jan 17, 2024
    •
      add support for mistral api (#610) · c2c8fe97
      Sean Hatfield authored
      
      * add support for mistral api
      
      * update docs to show support for Mistral
      
      * add default temp to all providers; suggested defaults differ per provider
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
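The per-provider default temperature mentioned above could look like the following minimal sketch. The table values and the `defaultTemperature` helper are illustrative assumptions, not the defaults AnythingLLM actually ships.

```javascript
// Illustrative per-provider temperature defaults (values are assumptions).
const PROVIDER_DEFAULT_TEMP = {
  openai: 0.7,
  anthropic: 0.7,
  mistral: 0.0, // lower default assumed here for more deterministic output
};

// Fall back to a generic default for providers with no entry.
function defaultTemperature(provider, fallback = 0.7) {
  return PROVIDER_DEFAULT_TEMP[provider] ?? fallback;
}
```

Using `??` rather than `||` matters here: a legitimate default of `0.0` is falsy, and `||` would silently replace it with the fallback.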
    •
      Per workspace model selection (#582) · 90df3758
      Sean Hatfield authored
      
      * WIP model selection per workspace (migrations and openai saves properly)
      
      * revert OpenAiOption
      
      * add support for models per workspace for anthropic, localAi, ollama, openAi, and togetherAi
      
      * remove unneeded comments
      
      * update logic for when LLMProvider is reset, reset Ai provider files with master
      
      * remove frontend/api reset of workspace chat and move logic to updateENV
      add postUpdate callbacks to envs
      
      * set preferred model for chat on class instantiation
      
      * remove extra param
      
      * linting
      
      * remove unused var
      
      * refactor chat model selection on workspace
      
      * linting
      
      * add fallback for base path to localai models
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
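The "set preferred model for chat on class instantiation" step above can be sketched as a provider that resolves its model once, in the constructor: workspace override first, system-wide default otherwise. The `ChatProvider` class and `chatModel` field are hypothetical names, not AnythingLLM's actual implementation.

```javascript
// Minimal sketch of per-workspace model selection resolved at construction.
class ChatProvider {
  constructor(workspace = null, systemModel = "gpt-3.5-turbo") {
    // Workspace preference wins when set; otherwise use the system default.
    this.model = workspace?.chatModel ?? systemModel;
  }
}
```

Resolving the model in the constructor means every chat made through one provider instance uses a consistent model, rather than re-checking workspace settings on each request.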
  23. Dec 07, 2023
    •
      [Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413) · 655ebd94
      Timothy Carambat authored
      * Implement use of native embedder (all-MiniLM-L6-v2)
      stop showing prisma queries during dev
      
      * Add native embedder as an available embedder selection
      
      * wrap model loader in try/catch
      
      * print progress on download
      
      * add built-in LLM support (experimental)
      
      * Update to progress output for embedder
      
      * move embedder selection options to component
      
      * safety checks for modelfile
      
      * update ref
      
      * Hide selection when on hosted subdomain
      
      * update documentation
      hide localLlama when on hosted
      
      * safety checks for storage of models
      
      * update dockerfile to pre-build Llama.cpp bindings
      
      * update lockfile
      
      * add langchain doc comment
      
      * remove extraneous --no-metal option
      
      * Show data handling for private LLM
      
      * persist model in memory for N+1 chats
      
      * update import
      update dev comment on token model size
      
      * update primary README
      
      * chore: more readme updates and remove screenshots - too much to maintain, just use the app!
      
      * remove screenshot link
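The "persist model in memory for N+1 chats" step above amounts to caching the loaded model at module scope so only the first chat pays the load cost. In this sketch, `loadModel` is a stand-in for the real Llama.cpp binding loader, and `getModel` is a hypothetical helper.

```javascript
// Module-level cache: the expensive model load runs once; subsequent
// chats reuse the in-memory instance until the model path changes.
let cachedModel = null;
let cachedPath = null;

async function getModel(modelPath, loadModel) {
  if (cachedModel && cachedPath === modelPath) return cachedModel; // N+1 chats
  cachedModel = await loadModel(modelPath); // slow: reads GGUF weights, builds context
  cachedPath = modelPath;
  return cachedModel;
}
```

The trade-off is RAM held between chats in exchange for skipping a multi-second (or longer) reload of the GGUF weights on every message.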