This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
  1. May 08, 2024
  2. May 02, 2024
  3. Apr 23, 2024
  4. Apr 19, 2024
  5. Apr 16, 2024
  6. Apr 06, 2024
  7. Apr 05, 2024
  8. Apr 04, 2024
  9. Mar 29, 2024
  10. Mar 22, 2024
  11. Mar 14, 2024
  12. Mar 06, 2024
  13. Feb 24, 2024
    • [FEAT] OpenRouter integration (#784) · 633f4252
      Sean Hatfield authored
      
      * WIP openrouter integration
      
      * add OpenRouter options to onboarding flow and data handling
      
      * add todo to fix headers for rankings
      
      * OpenRouter LLM support complete
      
      * Fix hanging response stream with OpenRouter
      update tagline
      update comment
      
      * update timeout comment
      
      * wait for first chunk to start timer
      
      * sort OpenRouter models by organization
      
      * uppercase first letter of organization
      
      * sort grouped models by org
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      633f4252
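
The OpenRouter fix above resolves a hanging response stream by only starting the stall timer once the first chunk has arrived, so a slow time-to-first-token does not get treated as a dead connection. Below is a minimal TypeScript sketch of that pattern; the generic wrapper name, the 20-second idle window, and the error message are illustrative assumptions, not AnythingLLM's actual implementation.

```ts
// Wrap a streaming response so it aborts if chunks stop arriving, but do not
// start counting until the first chunk has been received (assumed behavior).
async function* withIdleTimeout<T>(
  stream: AsyncIterable<T>,
  idleMs = 20_000
): AsyncGenerator<T> {
  const iterator = stream[Symbol.asyncIterator]();
  let firstChunkSeen = false;

  while (true) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    const result = await (firstChunkSeen
      ? Promise.race([
          iterator.next(),
          new Promise<IteratorResult<T>>((_, reject) => {
            timer = setTimeout(
              () => reject(new Error("Stream stalled mid-response")),
              idleMs
            );
          }),
        ])
      : iterator.next()); // no timer while waiting for the first chunk
    if (timer) clearTimeout(timer);

    if (result.done) return;
    firstChunkSeen = true;
    yield result.value;
  }
}

// Usage sketch: for await (const token of withIdleTimeout(llmStream)) { ... }
```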
  14. Feb 22, 2024
  15. Feb 19, 2024
  16. Feb 08, 2024
    • [FEAT] Customizable footer icon links in Appearance Settings (#694) · b9855249
      Sean Hatfield authored
      
      * WIP custom footer icons
      
      * UI for updating footer icons complete and backend to save/modify
      
      * add backend for unprotected footer fetch
      
      * break out footer into separate component and render footer items using a cache for 1 hour
      
      * wip review
      
      * refactor & cleanup
      
      * Optimize footer form component
      Optimize caching for footer icons
      Add validation on SystemSetting upserts
      Normalize fallback items for footer_data
      
      * Adjust max icons to 3
      
      * fix success message on remove
      
      * fix success message on remove
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      b9855249
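
The footer-icons commit above renders footer items from a cache with a one-hour lifetime and caps the list at three icons. A minimal sketch of that idea follows; the FooterItem shape and the fetch callback are assumptions for illustration, not the project's actual code.

```ts
// Serve footer icons from an in-memory cache with a 1-hour TTL and a 3-icon cap.
type FooterItem = { icon: string; url: string };

const TTL_MS = 60 * 60 * 1000; // 1 hour
const MAX_ICONS = 3;

let cached: { items: FooterItem[]; fetchedAt: number } | null = null;

async function getFooterItems(
  fetchFromSettings: () => Promise<FooterItem[]>
): Promise<FooterItem[]> {
  const now = Date.now();
  if (cached && now - cached.fetchedAt < TTL_MS) return cached.items;

  const items = (await fetchFromSettings()).slice(0, MAX_ICONS);
  cached = { items, fetchedAt: now };
  return items;
}
```

Caching here trades a little staleness for avoiding a settings lookup on every page render, which is why the fetch is unprotected and cheap to repeat after expiry.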
  17. Feb 06, 2024
  18. Jan 26, 2024
    • feature: Integrate Astra as vectorDBProvider (#648) · 5614e2ed
      Hakeem Abbas authored
      
      * feature: Integrate Astra as vectorDBProvider
      
      feature: Integrate Astra as vectorDBProvider
      
      * Update .env.example
      
      * Add env.example to docker example file
      Update spellcheck for Astra
      Update Astra key for vector selection
      Update order of AstraDB options
      Resize Astra logo image to 330x330
      Update methods of Astra to take in latest vectorDB params like TopN and more
      Update Astra interface to support default methods and avoid crash errors from 404 collections
      Update Astra interface to comply to max chunk insertion limitations
      Update Astra interface to dynamically set dimensionality from chunk 0 size on creation
      
      * reset workspaces
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      5614e2ed
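
Two details in the Astra commit above are worth unpacking: the collection's dimensionality is set dynamically from the size of chunk 0, and inserts are split up to respect a maximum-chunk insertion limit. The TypeScript sketch below illustrates both; the client interface and the batch size of 20 are assumptions, not AstraDB's real API.

```ts
// Create a collection sized to the embedder's output and insert in batches.
type EmbeddedChunk = { id: string; vector: number[]; text: string };

interface VectorCollection {
  insertMany(chunks: EmbeddedChunk[]): Promise<void>;
}

interface VectorClient {
  createCollection(name: string, dimensions: number): Promise<VectorCollection>;
}

const MAX_INSERT_BATCH = 20; // assumed per-request insert limit

async function storeChunks(
  client: VectorClient,
  namespace: string,
  chunks: EmbeddedChunk[]
): Promise<void> {
  if (chunks.length === 0) return;

  // Dimensionality comes from chunk 0 so the collection matches the embedder.
  const dimensions = chunks[0].vector.length;
  const collection = await client.createCollection(namespace, dimensions);

  // Respect the maximum insertion size by chunking the writes.
  for (let i = 0; i < chunks.length; i += MAX_INSERT_BATCH) {
    await collection.insertMany(chunks.slice(i, i + MAX_INSERT_BATCH));
  }
}
```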
  19. Jan 23, 2024
  20. Jan 18, 2024
  21. Jan 17, 2024
  22. Jan 12, 2024
  23. Jan 10, 2024
    • add Together AI LLM support (#560) · 1d39b8a2
      Sean Hatfield authored
      
      * add Together AI LLM support
      
      * update readme to support together ai
      
      * Patch togetherAI implementation
      
      * add model sorting/option labels by organization for model selection
      
      * linting + add data handling for TogetherAI
      
      * change truthy statement
      patch validLLMSelection method
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      1d39b8a2
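
Both the Together AI commit above and the OpenRouter commit mention sorting model options and labeling them by organization for the model-selection dropdown. Here is a small sketch of that grouping; the "org/model-name" id format is an assumption for illustration.

```ts
// Group models by the organization prefix of their id and sort within groups.
type Model = { id: string; name: string };

function groupModelsByOrganization(models: Model[]): Record<string, Model[]> {
  const grouped: Record<string, Model[]> = {};
  for (const model of models) {
    const [org] = model.id.split("/");
    // Uppercase the first letter of the organization for display labels.
    const label = org.charAt(0).toUpperCase() + org.slice(1);
    (grouped[label] ??= []).push(model);
  }
  // Sort models alphabetically within each organization group.
  for (const label of Object.keys(grouped)) {
    grouped[label].sort((a, b) => a.name.localeCompare(b.name));
  }
  return grouped;
}
```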
  24. Dec 28, 2023
  25. Dec 11, 2023
  26. Dec 08, 2023
  27. Dec 07, 2023
    • [Feature] AnythingLLM use locally hosted Llama.cpp and GGUF files for inferencing (#413) · 655ebd94
      Timothy Carambat authored
      * Implement use of native embedder (all-MiniLM-L6-v2)
      stop showing prisma queries during dev
      
      * Add native embedder as an available embedder selection
      
      * wrap model loader in try/catch
      
      * print progress on download
      
      * add built-in LLM support (experimental)
      
      * Update to progress output for embedder
      
      * move embedder selection options to component
      
      * safety checks for modelfile
      
      * update ref
      
      * Hide selection when on hosted subdomain
      
      * update documentation
      hide localLlama when on hosted
      
      * safety checks for storage of models
      
      * update dockerfile to pre-build Llama.cpp bindings
      
      * update lockfile
      
      * add langchain doc comment
      
      * remove extraneous --no-metal option
      
      * Show data handling for private LLM
      
      * persist model in memory for N+1 chats
      
      * update import
      update dev comment on token model size
      
      * update primary README
      
      * chore: more readme updates and remove screenshots - too much to maintain, just use the app!
      
      * remove screenshot link
      655ebd94
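
The local Llama.cpp commit above mentions safety checks on the model file, wrapping the model loader in try/catch, and persisting the model in memory across N+1 chats so the GGUF file is not reloaded per request. A minimal sketch of that caching pattern follows; the loader interface is a placeholder, not the project's actual bindings.

```ts
// Keep the loaded local model in a module-level variable so repeated chats
// reuse it instead of paying the GGUF load cost every time (assumed design).
import { existsSync } from "node:fs";

interface LoadedModel {
  prompt(input: string): Promise<string>;
}

let cachedModel: LoadedModel | null = null;

async function getModel(
  modelPath: string,
  loadModel: (path: string) => Promise<LoadedModel>
): Promise<LoadedModel> {
  if (cachedModel) return cachedModel; // reuse across chats

  // Safety check before handing the path to the native loader.
  if (!existsSync(modelPath)) {
    throw new Error(`Model file not found at ${modelPath}`);
  }

  try {
    cachedModel = await loadModel(modelPath);
  } catch (error) {
    throw new Error(`Failed to load local model: ${String(error)}`);
  }
  return cachedModel;
}
```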
    • chore: remove unused NO_DEBUG env · fecfb0fa
      timothycarambat authored
      fecfb0fa
  28. Dec 06, 2023
    • Add built-in embedding engine into AnythingLLM (#411) · 88cdd8c8
      Timothy Carambat authored
      * Implement use of native embedder (all-MiniLM-L6-v2)
      stop showing prisma queries during dev
      
      * Add native embedder as an available embedder selection
      
      * wrap model loader in try/catch
      
      * print progress on download
      
      * Update to progress output for embedder
      
      * move embedder selection options to component
      
      * forgot import
      
      * add Data privacy alert updates for local embedder
      88cdd8c8
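
The built-in embedder commit above notes printing progress while the model downloads. The sketch below shows one way to stream a model file to disk while reporting percent complete; the URL, destination, and console output format are placeholders, not the embedder's real download mechanism.

```ts
// Stream a model file to disk and print download progress (illustrative only).
import { createWriteStream } from "node:fs";

async function downloadWithProgress(url: string, destination: string): Promise<void> {
  const response = await fetch(url);
  if (!response.ok || !response.body) {
    throw new Error(`Download failed: ${response.status}`);
  }

  const total = Number(response.headers.get("content-length") ?? 0);
  const file = createWriteStream(destination);
  const reader = response.body.getReader();
  let received = 0;

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    received += value.length;
    file.write(value);
    if (total > 0) {
      const percent = Math.round((received / total) * 100);
      process.stdout.write(`\rDownloading embedding model: ${percent}%`);
    }
  }
  file.end();
  process.stdout.write("\n");
}
```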
  29. Dec 04, 2023
  30. Nov 16, 2023
  31. Nov 14, 2023
  32. Nov 09, 2023
    • Using OpenAI API locally (#335) · f499f1ba
      Francisco Bischoff authored
      
      * Using OpenAI API locally
      
      * Infinite prompt input and compression implementation (#332)
      
      * WIP on continuous prompt window summary
      
      * wip
      
      * Move chat out of VDB
      simplify chat interface
      normalize LLM model interface
      have compression abstraction
      Cleanup compressor
      TODO: Anthropic stuff
      
      * Implement compression for Anthropic
      Fix lancedb sources
      
      * cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
      
      * Resolve Weaviate citation sources not working with schema
      
      * comment cleanup
      
      * disable import on hosted instances (#339)
      
      * disable import on hosted instances
      
      * Update UI on disabled import/export
      
      ---------
      
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
      
      * Add support for gpt-4-turbo 128K model (#340)
      
      resolves #336
      Add support for gpt-4-turbo 128K model
      
      * 315 show citations based on relevancy score (#316)
      
      * settings for similarity score threshold and prisma schema updated
      
      * prisma schema migration for adding similarityScore setting
      
      * WIP
      
      * Min score default change
      
      * added similarityThreshold checking for all vectordb providers
      
      * linting
      
      ---------
      
      Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
      
      * rename localai to lmstudio
      
      * forgot files that were renamed
      
      * normalize model interface
      
      * add model and context window limits
      
      * update LMStudio tagline
      
      * Fully working LMStudio integration
      
      ---------
      Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>
      Co-authored-by: Timothy Carambat <rambat1010@gmail.com>
      Co-authored-by: Sean Hatfield <seanhatfield5@gmail.com>
      f499f1ba
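
Folded into the commit above is the "show citations based on relevancy score" work: retrieved sources are only surfaced as citations if their similarity score clears a configurable threshold. A minimal sketch of that filter follows; the source shape and the default threshold value are assumptions.

```ts
// Drop retrieved sources below the similarity threshold before citing them.
type RetrievedSource = { title: string; text: string; score: number };

function filterByRelevancy(
  sources: RetrievedSource[],
  similarityThreshold = 0.25 // assumed default; configurable per workspace
): RetrievedSource[] {
  return sources
    .filter((source) => source.score >= similarityThreshold)
    .sort((a, b) => b.score - a.score); // most relevant citations first
}
```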
  33. Nov 06, 2023
    • Infinite prompt input and compression implementation (#332) · be9d8b03
      Timothy Carambat authored
      * WIP on continuous prompt window summary
      
      * wip
      
      * Move chat out of VDB
      simplify chat interface
      normalize LLM model interface
      have compression abstraction
      Cleanup compressor
      TODO: Anthropic stuff
      
      * Implement compression for Anthropic
      Fix lancedb sources
      
      * cleanup vectorDBs and check that lance, chroma, and pinecone are returning valid metadata sources
      
      * Resolve Weaviate citation sources not working with schema
      
      * comment cleanup
      be9d8b03
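
The compression commit above introduces a continuous prompt-window summary: when accumulated chat history would overflow the model's context window, older messages are compressed while recent ones stay verbatim. The sketch below illustrates the general technique; the 4-characters-per-token estimate, the half-window budget for recent messages, and the summarize() callback are assumptions, not the project's actual compressor.

```ts
// Summarize the oldest messages when history exceeds the context window.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

async function compressHistory(
  history: ChatMessage[],
  contextWindow: number,
  summarize: (messages: ChatMessage[]) => Promise<string>
): Promise<ChatMessage[]> {
  const total = history.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  if (total <= contextWindow) return history; // fits as-is, nothing to compress

  // Keep the most recent messages that fit in roughly half the window...
  const recent: ChatMessage[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i].content);
    if (used + cost > contextWindow / 2) break;
    recent.unshift(history[i]);
    used += cost;
  }

  // ...and collapse everything older into a single summary message.
  const older = history.slice(0, history.length - recent.length);
  const summary = await summarize(older);
  return [{ role: "system", content: `Conversation summary: ${summary}` }, ...recent];
}
```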