This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- Feb 22, 2024
  - timothycarambat authored
  - Sean Hatfield authored: [DOCS] Update Docker documentation to show how to set up Ollama with the Dockerized version of AnythingLLM (#774)
    * update HOW_TO_USE_DOCKER to help with Ollama setup using Docker
    * update HOW_TO_USE_DOCKER
    * styles update
    * create separate README for Ollama and link to it in HOW_TO_USE_DOCKER
    * styling update
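For readers following that documentation change: the core of the setup is pointing the Dockerized AnythingLLM instance at an Ollama server running on the host. A minimal sketch in Node, assuming the usual Docker Desktop host alias (host.docker.internal), the standard Ollama port, and an illustrative env variable name:

```js
// Hypothetical connectivity check before wiring AnythingLLM's LLM settings to Ollama.
// http://host.docker.internal:11434 is the typical host alias/port, not a value taken
// from the linked docs; override via OLLAMA_BASE_PATH if your setup differs.
const OLLAMA_BASE_URL =
  process.env.OLLAMA_BASE_PATH || "http://host.docker.internal:11434";

async function checkOllama() {
  const res = await fetch(`${OLLAMA_BASE_URL}/api/tags`); // lists locally pulled models
  if (!res.ok) throw new Error(`Ollama unreachable: HTTP ${res.status}`);
  const { models = [] } = await res.json();
  console.log(`Ollama reachable with ${models.length} model(s) pulled.`);
}

checkOllama().catch((err) => console.error(err.message));
```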
- Feb 19, 2024
  - Timothy Carambat authored: Update Docker Run command
- Feb 12, 2024
  - timothycarambat authored
- Feb 06, 2024
  - Timothy Carambat authored
- Jan 29, 2024
  - Sean Hatfield authored:
    * add support for new OpenAI models
    * QOL changes/improve logic for adding new OpenAI embedding models
    * add example file inputs for OpenAI embedding ENV selection
    * fix if-statement conditional
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
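As context for the "embedding ENV selection" item: model choice in a change like this is typically read from an environment variable with a safe default. A sketch under that assumption; the env key and supported-model list below are illustrative, not copied from the change.

```js
// Illustrative embedding-model selection from the environment.
// Model IDs are current OpenAI embedding models; the env key is an assumption.
const SUPPORTED_EMBEDDING_MODELS = [
  "text-embedding-ada-002",
  "text-embedding-3-small",
  "text-embedding-3-large",
];

function resolveEmbeddingModel(env = process.env) {
  const choice = env.EMBEDDING_MODEL_PREF || "text-embedding-ada-002";
  if (!SUPPORTED_EMBEDDING_MODELS.includes(choice)) {
    throw new Error(`Unsupported OpenAI embedding model: ${choice}`);
  }
  return choice;
}

console.log(resolveEmbeddingModel()); // "text-embedding-ada-002" unless overridden
```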
- Jan 26, 2024
  - Hakeem Abbas authored:
    * feature: Integrate Astra as vectorDBProvider
    * Update .env.example
    * Add env.example to docker example file
    * Update spellcheck for Astra
    * Update Astra key for vector selection
    * Update order of AstraDB options
    * Resize Astra logo image to 330x330
    * Update methods of Astra to take in latest vectorDB params like TopN and more
    * Update Astra interface to support default methods and avoid crash errors from 404 collections
    * Update Astra interface to comply with max chunk insertion limitations
    * Update Astra interface to dynamically set dimensionality from chunk 0 size on creation
    * reset workspaces
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
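Two of the Astra items above (batching inserts under a max-chunk limit and deriving dimensionality from chunk 0) describe a generic vector-store pattern. A rough sketch against a hypothetical collection client, not the actual Astra SDK calls used in the change:

```js
// Hypothetical illustration of the behaviours named above. `collection` stands in
// for whatever vector-store client is in use; insertMany() is assumed, not Astra-specific.
const MAX_INSERT_BATCH = 20; // assumed per-request document limit

function inferDimensionality(chunks) {
  if (!chunks.length) throw new Error("no chunks to infer dimensionality from");
  return chunks[0].values.length; // dimension taken from chunk 0's embedding
}

async function insertChunks(collection, chunks) {
  const dimension = inferDimensionality(chunks);
  console.log(`collection will be created with dimension ${dimension}`);
  for (let i = 0; i < chunks.length; i += MAX_INSERT_BATCH) {
    await collection.insertMany(chunks.slice(i, i + MAX_INSERT_BATCH));
  }
}

module.exports = { inferDimensionality, insertChunks };
```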
- Jan 23, 2024
  - Sean Hatfield authored:
    * migrate pinecone package to latest version and migrate pinecone vectordb provider class
    * remove pinecone environment name env variable and update docs to reflect removal; serverless support complete
    * migrate query for pinecone db
    * typo in log
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
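The "remove pinecone environment name env variable" item reflects the newer Pinecone Node client, which is constructed from an API key alone and resolves serverless indexes by name. A sketch assuming @pinecone-database/pinecone v2-style usage; the index name, env keys, and topK are placeholders.

```js
import { Pinecone } from "@pinecone-database/pinecone";

// Newer client: no PINECONE_ENVIRONMENT required, only the API key.
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY });
const index = pinecone.index(process.env.PINECONE_INDEX || "anythingllm"); // illustrative name

// Dummy 1536-dim vector standing in for a real query embedding.
const queryEmbedding = new Array(1536).fill(0.01);

const response = await index.query({
  vector: queryEmbedding,
  topK: 4,
  includeMetadata: true,
});
console.log(response.matches?.map((m) => ({ id: m.id, score: m.score })));
```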
- Jan 18, 2024
  - Timothy Carambat authored:
    * feat: Add support for Zilliz Cloud by Milvus
    * update placeholder text; update data handling stmt
    * update Zilliz descriptor
- Jan 17, 2024
  - Sean Hatfield authored:
    * add support for Mistral API
    * update docs to show support for Mistral
    * add default temp to all providers, suggest different results per provider
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
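On "add default temp to all providers": a per-provider default temperature lookup is the simplest reading of that change. The values and provider keys below are assumptions for illustration only.

```js
// Illustrative per-provider default temperatures; the real defaults may differ.
const DEFAULT_TEMPERATURES = {
  openai: 0.7,
  anthropic: 0.7,
  mistral: 0.0,
  localai: 0.7,
};

function defaultTemperature(provider) {
  return DEFAULT_TEMPERATURES[provider] ?? 0.7; // common fallback when unknown
}

console.log(defaultTemperature("mistral")); // 0.0
```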
- Jan 12, 2024
  - Shuyoou authored:
    * issue #543: support Milvus vector db
    * migrate Milvus to use MilvusClient instead of ORM; normalize env setup for docs/implementation
    * feat: embedder model dimension added
    * update comments
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
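"Migrate Milvus to use MilvusClient instead of ORM" refers to the client class in the Milvus Node SDK. A minimal sketch, assuming @zilliz/milvus2-sdk-node and illustrative env variable and collection names:

```js
import { MilvusClient } from "@zilliz/milvus2-sdk-node";

// Env keys below are assumptions for illustration, not the project's exact names.
const client = new MilvusClient({
  address: process.env.MILVUS_ADDRESS || "localhost:19530",
  username: process.env.MILVUS_USERNAME,
  password: process.env.MILVUS_PASSWORD,
});

// Simple smoke test: does a collection with this (illustrative) name exist?
const { value: exists } = await client.hasCollection({
  collection_name: "anythingllm_workspace",
});
console.log(`collection exists: ${exists}`);
```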
- Jan 10, 2024
  - Sean Hatfield authored:
    * add Together AI LLM support
    * update readme to note Together AI support
    * patch TogetherAI implementation
    * add model sorting/option labels by organization for model selection
    * linting + add data handling for TogetherAI
    * change truthy statement; patch validLLMSelection method
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
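"Model sorting/option labels by organization" amounts to grouping the provider's model catalog by vendor for the selection dropdown. A small sketch with invented entries:

```js
// Group a flat model list into dropdown sections keyed by organization.
// The sample model IDs are invented for illustration.
const models = [
  { id: "mistralai/Mixtral-8x7B-Instruct-v0.1", organization: "mistralai" },
  { id: "meta-llama/Llama-2-70b-chat-hf", organization: "meta-llama" },
  { id: "mistralai/Mistral-7B-Instruct-v0.2", organization: "mistralai" },
];

function groupByOrganization(list) {
  return list.reduce((groups, model) => {
    (groups[model.organization] ||= []).push(model.id);
    return groups;
  }, {});
}

console.log(groupByOrganization(models));
// { mistralai: ["mistralai/Mixtral-...", "mistralai/Mistral-..."], "meta-llama": [...] }
```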
- Jan 09, 2024
  - Timothy Carambat authored:
    * Update build process to support multi-platform builds
    * Bump @lancedb/vectordb to 0.1.19 for ARM & AMD compatibility
    * Patch puppeteer on ARM builds because of broken chromium (resolves #539, resolves #548)
    Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
- Jan 06, 2024
  - timothycarambat authored
- Jan 05, 2024
  - pritchey authored:
    * Added support for HTTPS to server
    * Move boot scripts to a helper file; catch bad SSL boot config; fall back from SSL boot to HTTP
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
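A minimal sketch of the "fall back from SSL boot to HTTP" behaviour described above, using Node's built-in https/http modules; the env variable names and port are assumptions, not the project's exact configuration.

```js
const fs = require("fs");
const http = require("http");
const https = require("https");

// Boot over HTTPS when cert paths are configured; if the SSL config is bad,
// fall back to plain HTTP instead of crashing on startup.
function bootServer(requestHandler, port = process.env.SERVER_PORT || 3001) {
  const { HTTPS_CERT_PATH, HTTPS_KEY_PATH } = process.env; // assumed env names
  try {
    if (!HTTPS_CERT_PATH || !HTTPS_KEY_PATH) throw new Error("no SSL config provided");
    const options = {
      cert: fs.readFileSync(HTTPS_CERT_PATH),
      key: fs.readFileSync(HTTPS_KEY_PATH),
    };
    https.createServer(options, requestHandler).listen(port, () =>
      console.log(`HTTPS server listening on ${port}`)
    );
  } catch (e) {
    console.warn(`SSL boot failed (${e.message}); falling back to HTTP.`);
    http.createServer(requestHandler).listen(port, () =>
      console.log(`HTTP server listening on ${port}`)
    );
  }
}

bootServer((req, res) => res.end("ok"));
```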
- Dec 28, 2023
  - Timothy Carambat authored:
    * Prevent external service localhost question
    * add 0.0.0.0 to docker-invalid URL
    * clarify hint
  - Timothy Carambat authored: Add support for Ollama as LLM provider (resolves #493)
  - Timothy Carambat authored: resolves #489
- Dec 20, 2023
  - timothycarambat authored
  - Timothy Carambat authored:
    * wip
    * side-by-side test
    * patch syntax highlighting
    * remove spacing formatting
    * swap PowerShell command
  - timothycarambat authored: update file picker spacing for attributes
- Dec 19, 2023
  - Timothy Carambat authored:
    * wip
    * btn links
    * button updates
    * update dev instructions
    * typo
- Dec 17, 2023
  - timothycarambat authored
- Dec 16, 2023
  - lunamidori5 authored
- Dec 14, 2023
  - Timothy Carambat authored:
    * wip: init refactor of document processor to JS
    * add NodeJS PDF support
    * wip: parity with Python processor; feat: add pptx support
    * fix: forgot files
    * remove Python scripts totally
    * wip: update docker to boot new collector
    * add package.json support
    * update dockerfile for new build
    * update gitignore and linting
    * add more protections on file lookup
    * update package.json
    * test build
    * update docker commands to use cap-add=SYS_ADMIN so the web scraper can run; update all scripts to reflect this; remove docker build for branch
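"Add NodeJS PDF support" in a JS document collector generally means reading the file into a buffer and handing it to a PDF-text library before chunking. A sketch assuming the pdf-parse package; the actual loader the project uses may differ.

```js
const fs = require("fs");
const pdf = require("pdf-parse"); // assumed library; the real collector may use another loader

// Extract plain text from a PDF so it can be chunked and embedded downstream.
async function pdfToText(filepath) {
  const buffer = fs.readFileSync(filepath);
  const { text, numpages } = await pdf(buffer);
  console.log(`parsed ${numpages} page(s) from ${filepath}`);
  return text;
}

pdfToText("./example.pdf").then((text) => console.log(text.slice(0, 200)));
```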
- Dec 13, 2023
  - Timothy Carambat authored: add Docker internal URL
- Dec 08, 2023
  - Timothy Carambat authored: fix: cleanup code for embedding length clarify (resolves #388)
- Dec 07, 2023
  - timothycarambat authored
  - Timothy Carambat authored:
    * Implement use of native embedder (all-MiniLM-L6-v2); stop showing Prisma queries during dev
    * Add native embedder as an available embedder selection
    * wrap model loader in try/catch
    * print progress on download
    * add built-in LLM support (experimental)
    * Update to progress output for embedder
    * move embedder selection options to component
    * safety checks for modelfile
    * update ref
    * Hide selection when on hosted subdomain
    * update documentation; hide localLlama when on hosted
    * safety checks for storage of models
    * update dockerfile to pre-build Llama.cpp bindings
    * update lockfile
    * add langchain doc comment
    * remove extraneous --no-metal option
    * Show data handling for private LLM
    * persist model in memory for N+1 chats
    * update import; update dev comment on token model size
    * update primary README
    * chore: more readme updates and remove screenshots (too much to maintain, just use the app!)
    * remove screenshot link
  - timothycarambat authored
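The native embedder named above (all-MiniLM-L6-v2) is the kind of model that can run in-process via transformers.js. A hedged sketch using @xenova/transformers; the model ID and options follow that library's documented usage and are not necessarily the project's exact code.

```js
import { pipeline } from "@xenova/transformers";

// Load a local feature-extraction pipeline for all-MiniLM-L6-v2 (downloaded on
// first use and cached), then mean-pool and normalize to get a 384-dim embedding.
const embedder = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");

const output = await embedder("AnythingLLM embeds documents locally with this model.", {
  pooling: "mean",
  normalize: true,
});

const vector = Array.from(output.data); // plain number[] for the vector DB
console.log(vector.length); // 384
```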
- Dec 06, 2023
  - timothycarambat authored
  - Timothy Carambat authored:
    * update docker build instructions
    * cleanup
- Dec 05, 2023
  - pritchey authored:
    * Added improved password complexity checking capability
    * Move password complexity checker to User.util; dynamically import required libraries depending on code execution flow; lint
    * Ensure persistence of password requirements on restarts via env-dump; copy example schema to docker env as well
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
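A rough sketch of what an env-driven password complexity check like the one described above can look like. The real change dynamically imports a dedicated complexity library; the inline checks and env variable names below are assumptions for illustration.

```js
// Illustrative env-driven password complexity check; env names are assumed.
const REQUIREMENTS = {
  minLength: Number(process.env.PASSWORDMINCHAR || 8),
  requireUppercase: process.env.PASSWORDUPPERCASE !== "0",
  requireNumeric: process.env.PASSWORDNUMERIC !== "0",
};

function checkPasswordComplexity(password = "") {
  const errors = [];
  if (password.length < REQUIREMENTS.minLength)
    errors.push(`must be at least ${REQUIREMENTS.minLength} characters`);
  if (REQUIREMENTS.requireUppercase && !/[A-Z]/.test(password))
    errors.push("must contain an uppercase letter");
  if (REQUIREMENTS.requireNumeric && !/[0-9]/.test(password))
    errors.push("must contain a number");
  return { valid: errors.length === 0, errors };
}

console.log(checkPasswordComplexity("hunter2")); // { valid: false, errors: [...] }
```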
- Dec 04, 2023
  - Timothy Carambat authored:
    * Add API key option to LocalAI
    * add API key for model dropdown selector
- Nov 18, 2023
  - timothycarambat authored
  - timothycarambat authored
- Nov 17, 2023
  - Sean Hatfield authored:
    * WIP adding URL uploads to document picker
    * fix manual script for uploading URL to custom-documents
    * fix metadata for URL scraping
    * wip URL parsing
    * update how async link scraping works
    * docker-compose defaults added; no autocomplete on URLs
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
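On "async link scraping": at its simplest, uploading a URL means fetching the page, reducing it to text, and saving it with metadata as a custom document. The sketch below uses a naive tag-strip for brevity; the project's scraper runs a headless browser (see the cap-add=SYS_ADMIN note above), so treat this as illustrative only, including the document field names.

```js
// Naive URL-to-document sketch: fetch HTML, strip tags, keep simple metadata.
async function scrapeUrlToDocument(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`failed to fetch ${url}: HTTP ${res.status}`);
  const html = await res.text();
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<style[\s\S]*?<\/style>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
  return {
    title: url,
    source: url,
    published: new Date().toISOString(),
    pageContent: text,
  };
}

scrapeUrlToDocument("https://example.com").then((doc) =>
  console.log(doc.pageContent.slice(0, 120))
);
```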
- Nov 14, 2023
  - Tobias Landenberger authored:
    * feature: add LocalAI as embedding provider
    * chore: add LocalAI image
    * chore: add LocalAI embedding examples to docker .env.example
    * update setting env; pull models from LocalAI API
    * update comments on embedder; don't show cost estimation on UI
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
  - Timothy Carambat authored:
    * feature: add LocalAI as LLM provider
    * update Onboarding/mgmt settings; grab models from models endpoint for LocalAI; merge with master
    * update streaming for complete chunk streaming; update LocalAI LLM to be able to stream
    * force schema on URL
    Co-authored-by: timothycarambat <rambat1010@gmail.com>, tlandenberger <tobiaslandenberger@gmail.com>
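"Grab models from models endpoint" works because LocalAI exposes an OpenAI-compatible API, so a model dropdown can be filled from its /v1/models route. A sketch, with the base-path and API-key env names as assumptions:

```js
// Populate a model dropdown from LocalAI's OpenAI-compatible /v1/models endpoint.
// LOCAL_AI_BASE_PATH is an assumed env name; point it at your LocalAI instance.
const LOCAL_AI_BASE_PATH = process.env.LOCAL_AI_BASE_PATH || "http://localhost:8080/v1";

async function listLocalAiModels(apiKey = process.env.LOCAL_AI_API_KEY) {
  const res = await fetch(`${LOCAL_AI_BASE_PATH}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!res.ok) throw new Error(`LocalAI models endpoint failed: HTTP ${res.status}`);
  const { data = [] } = await res.json();
  return data.map((model) => model.id);
}

listLocalAiModels().then((ids) => console.log(ids));
```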
- Nov 09, 2023
  - Francisco Bischoff authored:
    * Using OpenAI API locally
    * Infinite prompt input and compression implementation (#332)
      * WIP on continuous prompt window summary
      * wip
      * Move chat out of VDB; simplify chat interface; normalize LLM model interface; have compression abstraction; cleanup compressor; TODO: Anthropic stuff
      * Implement compression for Anthropic; fix LanceDB sources
      * cleanup vectorDBs and check that Lance, Chroma, and Pinecone are returning valid metadata sources
      * Resolve Weaviate citation sources not working with schema
      * comment cleanup
    * disable import on hosted instances (#339)
      * disable import on hosted instances
      * Update UI on disabled import/export
      Co-authored-by: timothycarambat <rambat1010@gmail.com>
    * Add support for gpt-4-turbo 128K model (#340, resolves #336)
    * 315: show citations based on relevancy score (#316)
      * settings for similarity score threshold and Prisma schema updated
      * Prisma schema migration for adding similarityScore setting
      * WIP
      * Min score default change
      * added similarityThreshold checking for all vectordb providers
      * linting
      Co-authored-by: shatfield4 <seanhatfield5@gmail.com>
    * rename localai to lmstudio
    * forgot files that were renamed
    * normalize model interface
    * add model and context window limits
    * update LMStudio tagline
    * Fully working LMStudio integration
    Co-authored-by: Francisco Bischoff <984592+franzbischoff@users.noreply.github.com>, Timothy Carambat <rambat1010@gmail.com>, Sean Hatfield <seanhatfield5@gmail.com>
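The "show citations based on relevancy score" items above boil down to filtering retrieved sources by a configurable similarityThreshold before they are returned as citations. A small sketch; the default threshold and the source object shape are illustrative.

```js
// Keep only retrieved chunks whose similarity score clears the workspace threshold.
// The default of 0.25 and the source shape are assumptions for illustration.
function filterByRelevancy(sources, similarityThreshold = 0.25) {
  return sources.filter((source) => (source.score ?? 0) >= similarityThreshold);
}

const sources = [
  { text: "chunk A", score: 0.82 },
  { text: "chunk B", score: 0.31 },
  { text: "chunk C", score: 0.12 }, // dropped: below threshold
];

console.log(filterByRelevancy(sources, 0.25).map((s) => s.text)); // ["chunk A", "chunk B"]
```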