How to use Dockerized Anything LLM
Use the Dockerized version of AnythingLLM for a much faster and more complete startup.
Minimum Requirements
Tip
Running AnythingLLM on AWS/GCP/Azure? You should aim for at least 2GB of RAM. Disk storage is proportional to however much data you will be storing (documents, vectors, models, etc). Minimum 10GB recommended.
- docker installed on your machine
- yarn and node on your machine
- access to an LLM running locally or remotely
* AnythingLLM by default uses a built-in vector database powered by LanceDB
* AnythingLLM by default embeds text privately on the instance
Recommended way to run dockerized AnythingLLM!
Important
If you are running another service on localhost, like Chroma, LocalAI, or LMStudio,
you will need to use http://host.docker.internal:xxxx to access the service from within
the Docker container running AnythingLLM, as localhost:xxxx
will not resolve to the host system.
Requires Docker v18.03+ on Win/Mac and 20.10+ on Linux/Ubuntu for host.docker.internal to resolve!
Linux: add --add-host=host.docker.internal:host-gateway
to the docker run command for this to resolve.
e.g., a Chroma host URL running on localhost:8000 on the host machine needs to be http://host.docker.internal:8000 when used in AnythingLLM.
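As a sketch of the Linux workaround above (the port and image tag here are illustrative assumptions, not fixed values):

```shell
# On Linux, map host.docker.internal to the host gateway explicitly,
# so services on the host (e.g. Chroma on port 8000) are reachable
# from inside the AnythingLLM container.
docker run -d -p 3001:3001 \
  --add-host=host.docker.internal:host-gateway \
  mintplexlabs/anythingllm

# Inside AnythingLLM, point at the host service via:
#   http://host.docker.internal:8000
```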
Tip
It is best to mount the container's storage volume to a folder on your host machine so that you can pull in future updates without deleting your existing data!
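A minimal sketch of the tip above, assuming a storage folder under `$HOME` and the mount paths used by the AnythingLLM image (treat the exact paths and variable names as illustrative):

```shell
# Pick a folder on the host to persist AnythingLLM data across updates.
export STORAGE_LOCATION="$HOME/anythingllm"
mkdir -p "$STORAGE_LOCATION"
touch "$STORAGE_LOCATION/.env"

# Mount that folder into the container; pulling a newer image later
# reuses the same host folder, so documents and vectors survive upgrades.
docker run -d -p 3001:3001 \
  -v "$STORAGE_LOCATION":/app/server/storage \
  -v "$STORAGE_LOCATION/.env":/app/server/.env \
  -e STORAGE_DIR="/app/server/storage" \
  mintplexlabs/anythingllm
```

Because the data lives on the host, `docker pull` plus re-running the command updates the app without touching your documents.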