This project is mirrored from https://github.com/Mintplex-Labs/anything-llm.
- May 08, 2024
  - timothycarambat authored
  - timothycarambat authored
  - timothycarambat authored
- May 06, 2024
  - ShadowArcanist authored: Added new LLMs to supported LLMs list on README
    - Added KoboldCPP to supported LLMs list
    - Added Cohere to supported LLMs list
    - Added Generic OpenAI to supported LLMs list
    - Added Cohere to supported Embedding models list
- May 02, 2024
  - timothycarambat authored
- Apr 22, 2024
  - timothycarambat authored
- Apr 19, 2024
  - Timothy Carambat authored:
    - Add LMStudio embedding endpoint support
    - Update alive path check for HEAD; remove commented JSX
    - Update comment
  - timothycarambat authored
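The "alive path check for HEAD" above refers to probing the embedding server with a lightweight HEAD request before using it. A minimal sketch of that idea, assuming LM Studio's default local port (1234); the URL and path are illustrative, not taken from the commit.

```ts
// Minimal sketch: probe an embedding endpoint with a cheap HEAD request before
// sending real work. The URL is illustrative (LM Studio's local server defaults
// to port 1234); which path answers HEAD requests depends on the service.
async function endpointIsAlive(baseUrl = "http://127.0.0.1:1234"): Promise<boolean> {
  try {
    const res = await fetch(baseUrl, { method: "HEAD" });
    return res.ok;
  } catch {
    return false; // connection refused, DNS failure, server not running, etc.
  }
}
```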
- Apr 15, 2024
  - ShadowArcanist authored: Updated README
- Apr 05, 2024
  - timothycarambat authored
- Apr 02, 2024
  - timothycarambat authored
- Feb 27, 2024
  - Timothy Carambat authored:
    - Add Ollama embedder model support calls
    - Update docs
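A minimal sketch of an Ollama embedder call like the one referenced above, assuming a local Ollama server on its default port (11434) and an illustrative model name; neither value comes from the commit.

```ts
// Minimal sketch: request an embedding from a local Ollama server.
// Assumptions: Ollama is running at http://127.0.0.1:11434 and the
// `nomic-embed-text` model has already been pulled (`ollama pull nomic-embed-text`).
async function embedWithOllama(text: string): Promise<number[]> {
  const res = await fetch("http://127.0.0.1:11434/api/embeddings", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "nomic-embed-text", prompt: text }),
  });
  if (!res.ok) throw new Error(`Ollama embeddings request failed: ${res.status}`);
  const data = (await res.json()) as { embedding: number[] };
  return data.embedding;
}
```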
- Feb 24, 2024
  - Sean Hatfield authored:
    - WIP OpenRouter integration
    - Add OpenRouter options to onboarding flow and data handling
    - Add todo to fix headers for rankings
    - OpenRouter LLM support complete
    - Fix hanging response stream with OpenRouter; update tagline; update comment
    - Update timeout comment
    - Wait for first chunk to start timer
    - Sort OpenRouter models by organization
    - Uppercase first letter of organization
    - Sort grouped models by org
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
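A minimal sketch of the grouping and sorting described above (group models by organization, uppercase the first letter of the organization, sort groups and the models inside them); the `Model` shape is an assumption for illustration.

```ts
// Minimal sketch: group models by organization and sort both the groups and
// the models inside each group. The `Model` shape is an assumption for
// illustration, not the project's actual type.
interface Model {
  id: string; // e.g. "mistralai/mistral-7b-instruct"
  name: string;
}

function groupModelsByOrganization(models: Model[]): Record<string, Model[]> {
  const grouped: Record<string, Model[]> = {};
  for (const model of models) {
    const org = model.id.split("/")[0] ?? "unknown";
    // Uppercase the first letter of the organization name.
    const label = org.charAt(0).toUpperCase() + org.slice(1);
    (grouped[label] ??= []).push(model);
  }
  // Sort models inside each organization by display name.
  for (const label of Object.keys(grouped)) {
    grouped[label].sort((a, b) => a.name.localeCompare(b.name));
  }
  // Return the groups in alphabetical order of organization.
  return Object.fromEntries(
    Object.entries(grouped).sort(([a], [b]) => a.localeCompare(b))
  );
}
```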
- Feb 22, 2024
  - Sean Hatfield authored:
    - Add LLM support for Perplexity
    - Update README & example env
    - Fix ENV keys in example env files
    - Slight changes for QA of Perplexity support
    - Update Perplexity AI name
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
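Perplexity exposes an OpenAI-compatible chat completions API, so an integration like the one above can reuse an OpenAI-style client. A minimal sketch, assuming the `openai` npm package, a `PERPLEXITY_API_KEY` environment variable, and an illustrative model name.

```ts
// Minimal sketch: call Perplexity through its OpenAI-compatible API.
// Assumptions: the `openai` npm package, a PERPLEXITY_API_KEY env var, and an
// illustrative model name; check Perplexity's docs for currently available models.
import OpenAI from "openai";

const perplexity = new OpenAI({
  apiKey: process.env.PERPLEXITY_API_KEY,
  baseURL: "https://api.perplexity.ai",
});

async function askPerplexity(question: string): Promise<string> {
  const completion = await perplexity.chat.completions.create({
    model: "sonar", // illustrative; substitute a model your account can use
    messages: [{ role: "user", content: question }],
  });
  return completion.choices[0]?.message?.content ?? "";
}
```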
- Feb 21, 2024
  - Timothy Carambat authored: Railway deploy button
- Feb 07, 2024
  - timothycarambat authored
  - timothycarambat authored
- Feb 05, 2024
  - Sean Hatfield authored:
    - WIP embedded app
    - WIP got response from backend in embedded app
    - WIP streaming prints to embedded app
    - Implemented streaming and Tailwind min for styling into embedded app
    - WIP embedded app history functional
    - Load params from script tag into embedded app
    - Rough in modularization of embed chat; cleanup dev process for easier dev support; move all chat to components; todo: build process; todo: backend support
    - Remove eslint config
    - Implement models and cleanup embed chat endpoints; improve build process for embed prod minification and bundle size awareness (WIP)
    - Forgot files
    - Rename to embed folder
    - Introduce chat modal styles
    - Add middleware validations on embed chat
    - Auto open param and default greeting
    - Reset chat history
    - Admin embed config page
    - Admin Embed Chats mgmt page
    - Update embed
    - Nonpriv
    - More style support; reopen if chat was last opened
    - Update comments
    - Remove unused imports
    - Allow change of workspace for embedconfig
    - Update failure to lookup message
    - Update reset script
    - Update instructions
    - Add more styling options; add sponsor text at bottom; support dynamic container height; loading animations
    - Publish new embed script
    - Add back syntax highlighting and keep bundle small via dynamic script build
    - Add hint
    - Update readme
    - Update copy model for snippet with link to styles
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
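The "load params from script tag" item above refers to the embed widget reading its configuration from the `<script>` tag that loads it. A minimal sketch of that pattern; the `data-*` attribute names and defaults here are hypothetical, not the widget's documented ones.

```ts
// Minimal sketch: an embed widget reading its settings from the <script> tag
// that loaded it. The data-* attribute names are hypothetical, for illustration.
interface EmbedSettings {
  embedId: string | null;
  baseApiUrl: string | null;
  greeting: string;
  openOnLoad: boolean;
}

function readEmbedSettings(): EmbedSettings {
  // document.currentScript is the <script> element currently executing.
  const script = document.currentScript as HTMLScriptElement | null;
  const data = (script?.dataset ?? {}) as Record<string, string | undefined>;
  return {
    embedId: data.embedId ?? null,       // from data-embed-id
    baseApiUrl: data.baseApiUrl ?? null, // from data-base-api-url
    greeting: data.greeting ?? "Hello! How can I help?",
    openOnLoad: data.openOnLoad === "on", // from data-open-on-load
  };
}
```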
- Jan 30, 2024
  - Alex Leventer authored: Update README.md
- Jan 23, 2024
  - Timothy Carambat authored:
    - Add bare metal support docs and deployment
    - Typos
- Jan 18, 2024
  - Timothy Carambat authored:
    - feat: Add support for Zilliz Cloud by Milvus
    - Update placeholder text; update data handling stmt
    - Update Zilliz descriptor
- Jan 17, 2024
  - Sean Hatfield authored:
    - Add support for Mistral API
    - Update docs to show support for Mistral
    - Add default temp to all providers, suggest different results per provider
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 12, 2024
  - Shuyoou authored:
    - Issue #543: support Milvus vector DB
    - Migrate Milvus to use MilvusClient instead of ORM; normalize env setup for docs/implementation; feat: embedder model dimension added
    - Update comments
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
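A minimal sketch of talking to Milvus through the Node SDK's `MilvusClient` (the client mentioned above), assuming the `@zilliz/milvus2-sdk-node` package and a local Milvus server; the address and collection name are illustrative.

```ts
// Minimal sketch: connect to Milvus with the Node SDK's MilvusClient.
// Assumptions: the @zilliz/milvus2-sdk-node package and a Milvus server
// reachable at localhost:19530; the collection name is illustrative.
import { MilvusClient } from "@zilliz/milvus2-sdk-node";

const milvus = new MilvusClient({ address: "localhost:19530" });

async function collectionExists(name: string): Promise<boolean> {
  const res = await milvus.hasCollection({ collection_name: name });
  return Boolean(res.value);
}

// Usage (illustrative): collectionExists("anythingllm_documents").then(console.log);
```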
- Jan 10, 2024
  - Sean Hatfield authored:
    - Add Together AI LLM support
    - Update README to support Together AI
    - Patch TogetherAI implementation
    - Add model sorting/option labels by organization for model selection
    - Linting + add data handling for TogetherAI
    - Change truthy statement; patch validLLMSelection method
    Co-authored-by: timothycarambat <rambat1010@gmail.com>
- Jan 05, 2024
  - Bhargav Kowshik authored
- Jan 04, 2024
  - timothycarambat authored
- Jan 03, 2024
  - timothycarambat authored
- Dec 28, 2023
  - Timothy Carambat authored: Add support for Ollama as LLM provider (resolves #493)
  - Timothy Carambat authored: resolves #489
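A minimal sketch of a chat call to Ollama as an LLM provider, assuming a local server on the default port and an illustrative model name; the project's actual provider wiring may differ.

```ts
// Minimal sketch: non-streaming chat completion against a local Ollama server.
// Assumptions: Ollama at http://127.0.0.1:11434 and a pulled model such as
// `llama3`; both are illustrative defaults, not values from the commit.
async function chatWithOllama(prompt: string): Promise<string> {
  const res = await fetch("http://127.0.0.1:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
      stream: false, // return a single JSON object instead of a stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama chat request failed: ${res.status}`);
  const data = (await res.json()) as { message: { role: string; content: string } };
  return data.message.content;
}
```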
- Dec 19, 2023
  - timothycarambat authored
  - Timothy Carambat authored:
    - WIP
    - Btn links
    - Button updates
    - Update dev instructions
    - Typo
- Dec 17, 2023
  - timothycarambat authored
  - Ikko Eltociear Ashimine authored: proprotional -> proportional
- Dec 14, 2023
  - Timothy Carambat authored:
    - wip: init refactor of document processor to JS
    - Add Node.js PDF support
    - wip: parity with Python processor; feat: add pptx support
    - fix: forgot files
    - Remove Python scripts totally
    - wip: update Docker to boot new collector
    - Add package.json support
    - Update Dockerfile for new build
    - Update gitignore and linting
    - Add more protections on file lookup
    - Update package.json
    - Test build
    - Update Docker commands to use --cap-add=SYS_ADMIN so the web scraper can run; update all scripts to reflect this; remove Docker build for branch
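The collector rewrite above moves document processing (PDF, pptx, web scraping) into Node.js. A minimal sketch of PDF text extraction in Node, assuming the `pdf-parse` package; the project's collector may use different loaders per file type.

```ts
// Minimal sketch: extract plain text from a PDF in Node.js. Assumes the
// pdf-parse package; the project's collector may use other loaders for
// other formats (pptx, HTML from the web scraper, etc.).
import { readFile } from "node:fs/promises";
import pdf from "pdf-parse";

async function pdfToText(path: string): Promise<string> {
  const buffer = await readFile(path);
  const parsed = await pdf(buffer); // resolves to { text, numpages, info, ... }
  return parsed.text;
}
```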
- Dec 13, 2023
  - Timothy Carambat authored: Add Docker internal URL
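The "Docker internal URL" above most likely refers to reaching services on the host machine from inside a container via `host.docker.internal`. A minimal sketch of choosing a base URL that way; the env var name and port are hypothetical.

```ts
// Minimal sketch: from inside a Docker container, services on the host machine
// are reachable at host.docker.internal rather than localhost. The env var name
// and port are hypothetical, for illustration only.
const insideDocker = process.env.RUNNING_IN_DOCKER === "true"; // hypothetical flag

const ollamaBaseUrl = insideDocker
  ? "http://host.docker.internal:11434" // resolves to the Docker host
  : "http://127.0.0.1:11434";           // plain localhost outside a container

console.log(`Using Ollama at ${ollamaBaseUrl}`);
```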
- Dec 11, 2023
  - Ikko Eltociear Ashimine authored: standlone -> standalone
- Dec 07, 2023
  - timothycarambat authored
  - Timothy Carambat authored:
    - Implement use of native embedder (all-MiniLM-L6-v2); stop showing Prisma queries during dev
    - Add native embedder as an available embedder selection
    - Wrap model loader in try/catch
    - Print progress on download
    - Add built-in LLM support (experimental)
    - Update to progress output for embedder
    - Move embedder selection options to component
    - Safety checks for modelfile
    - Update ref
    - Hide selection when on hosted subdomain
    - Update documentation; hide localLlama when on hosted
    - Safety checks for storage of models
    - Update Dockerfile to pre-build Llama.cpp bindings
    - Update lockfile
    - Add langchain doc comment
    - Remove extraneous --no-metal option
    - Show data handling for private LLM
    - Persist model in memory for N+1 chats
    - Update import; update dev comment on token model size
    - Update primary README
    - chore: more README updates and remove screenshots (too much to maintain, just use the app!)
    - Remove screenshot link
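A minimal sketch of local, in-process embeddings with all-MiniLM-L6-v2 via Transformers.js, assuming the `@xenova/transformers` package; the project's native embedder may handle model storage, caching, and chunking differently.

```ts
// Minimal sketch: local, in-process embeddings with all-MiniLM-L6-v2 through
// Transformers.js. Assumes the @xenova/transformers package; the project's own
// native embedder may differ in model hosting, caching, and chunking.
import { pipeline } from "@xenova/transformers";

async function embedLocally(texts: string[]): Promise<number[][]> {
  // Downloads the model on first use, then serves it from the local cache.
  const extractor = await pipeline("feature-extraction", "Xenova/all-MiniLM-L6-v2");
  const vectors: number[][] = [];
  for (const text of texts) {
    // Mean-pool token embeddings and normalize to get one 384-dimension vector.
    const output = await extractor(text, { pooling: "mean", normalize: true });
    vectors.push(Array.from(output.data as Float32Array));
  }
  return vectors;
}
```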
- Dec 06, 2023
  - timothycarambat authored
  - Timothy Carambat authored:
    - Update Docker build instructions
    - Cleanup