This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
Dec 07, 2023
      [Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413) · 655ebd94
      Timothy Carambat authored
      * Implement use of native embedder (all-MiniLM-L6-v2)
      stop showing Prisma queries during dev
      
      * Add native embedder as an available embedder selection
      
      * wrap model loader in try/catch (see the first sketch after this list)
      
      * print progress on download (see the second sketch after this list)
      
      * add built-in LLM support (experimental)
      
      * Update to progress output for embedder
      
      * move embedder selection options to component
      
      * safety checks for modelfile
      
      * update ref
      
      * Hide selection when on hosted subdomain
      
      * update documentation
      hide localLlama when on hosted
      
      * safety checks for storage of models
      
      * update dockerfile to pre-build Llama.cpp bindings
      
      * update lockfile
      
      * add langchain doc comment
      
      * remove extraneous --no-metal option
      
      * Show data handling for private LLM
      
      * persist model in memory for N+1 chats (see the third sketch after this list)
      
      * update import
      update dev comment on token model size
      
      * update primary README
      
      * chore: more readme updates and remove screenshots - too much to maintain, just use the app!
      
      * remove screenshot link
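
The bullets above are terse, so three hedged illustrations follow. First, a minimal sketch of the try/catch guard around the model loader: `loadGGUFModel`, its error text, and the null fallback are hypothetical stand-ins, not the repository's actual code.

```ts
// Hedged sketch: guard model loading so a missing or corrupt GGUF file
// fails gracefully instead of crashing the server. `loadGGUFModel` is a
// hypothetical stand-in for the real Llama.cpp binding call.
import fs from "fs";

async function loadGGUFModel(modelPath: string): Promise<unknown> {
  if (!fs.existsSync(modelPath)) {
    throw new Error(`Model file not found at ${modelPath}`);
  }
  return { path: modelPath }; // stand-in for the actual loaded model
}

export async function safeLoadModel(modelPath: string) {
  try {
    return await loadGGUFModel(modelPath);
  } catch (err) {
    console.error(`Failed to load GGUF model: ${(err as Error).message}`);
    return null; // caller can fall back to another LLM provider
  }
}
```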
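Second, a sketch of progress reporting during a model download, assuming a plain Node https stream; the real downloader may follow redirects (plain `https.get` does not) and format progress differently.

```ts
// Hedged sketch: stream a model file to disk while printing percent progress.
import fs from "fs";
import https from "https";

export function downloadWithProgress(url: string, dest: string): Promise<void> {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      const total = Number(res.headers["content-length"] ?? 0);
      let received = 0;
      res.on("data", (chunk: Buffer) => {
        received += chunk.length;
        if (total > 0) {
          process.stdout.write(
            `\rDownloading: ${((received / total) * 100).toFixed(1)}%`
          );
        }
      });
      res
        .pipe(fs.createWriteStream(dest))
        .on("finish", () => resolve())
        .on("error", reject);
    }).on("error", reject);
  });
}
```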
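Third, a sketch of persisting the model in memory for N+1 chats via a module-level cache; the import path and `getModel` name are hypothetical, and it reuses `safeLoadModel` from the first sketch.

```ts
// Hedged sketch: cache the loaded model at module scope so only the first
// chat pays the GGUF load cost; chats 2..N+1 reuse the same instance.
import { safeLoadModel } from "./safeLoadModel"; // hypothetical path

let cachedModel: unknown | null = null;

export async function getModel(modelPath: string) {
  if (cachedModel) return cachedModel; // later chats reuse the instance
  cachedModel = await safeLoadModel(modelPath); // first chat pays the cost
  return cachedModel;
}
```

Because a Node server process is long-lived, a module-level variable survives across requests, which is what keeps the model resident between chats.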