Unverified commit 32382f9d authored by Brian McBrayer, committed by GitHub

Update installation.md - fix typo (#8157)

Very small change - just fixes a grammar issue
parent 4b59695c
@@ -31,7 +31,7 @@ need additional environment keys + tokens setup depending on the LLM provider.
## Local Environment Setup
-If you don't wish to use OpenAI, the environment will automatically fall back to using `LlamaCPP` and `llama2-chat-13B` for text generation and `BAAI/bge-small-en` for retrieval and embeddings. This models will all run locally.
+If you don't wish to use OpenAI, the environment will automatically fall back to using `LlamaCPP` and `llama2-chat-13B` for text generation and `BAAI/bge-small-en` for retrieval and embeddings. These models will all run locally.
In order to use `LlamaCPP`, follow the installation guide [here](/examples/llm/llama_2_llama_cpp.ipynb). You'll need to install the `llama-cpp-python` package, preferably compiled to support your GPU. This will use around 11.5GB of memory across the CPU and GPU.
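
For reference, a minimal sketch of the local fallback setup described above, assuming the `llama_index` 0.8-era API shown in the linked notebook. The model path is a placeholder, and `llama-cpp-python` must already be installed (ideally compiled with GPU support, per its documentation):

```python
from llama_index import ServiceContext, set_global_service_context
from llama_index.llms import LlamaCPP

# Local text-generation model via llama.cpp; the path below is a placeholder
# for wherever your llama-2-chat-13B weights live.
llm = LlamaCPP(
    model_path="/path/to/llama-2-13b-chat.gguf",
    temperature=0.1,
    max_new_tokens=256,
    context_window=3900,
    # offload layers to the GPU if llama-cpp-python was compiled with GPU support
    model_kwargs={"n_gpu_layers": 1},
    verbose=True,
)

# Use a local embedding model for retrieval instead of OpenAI embeddings.
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model="local:BAAI/bge-small-en",
)
set_global_service_context(service_context)
```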