From 32382f9dc13d2c458c969468d4e30aae9a2eed0b Mon Sep 17 00:00:00 2001
From: Brian McBrayer <BrianMcBrayer@users.noreply.github.com>
Date: Tue, 17 Oct 2023 12:34:01 -0400
Subject: [PATCH] Update installation.md - fix typo (#8157)

Very small change - just fixes a grammar issue
---
 docs/getting_started/installation.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/getting_started/installation.md b/docs/getting_started/installation.md
index 887c640a7e..33c9b4c523 100644
--- a/docs/getting_started/installation.md
+++ b/docs/getting_started/installation.md
@@ -31,7 +31,7 @@ need additional environment keys + tokens setup depending on the LLM provider.
 
 ## Local Environment Setup
 
-If you don't wish to use OpenAI, the environment will automatically fallback to using `LlamaCPP` and `llama2-chat-13B` for text generation and `BAAI/bge-small-en` for retrieval and embeddings. This models will all run locally.
+If you don't wish to use OpenAI, the environment will automatically fall back to using `LlamaCPP` and `llama2-chat-13B` for text generation and `BAAI/bge-small-en` for retrieval and embeddings. These models will all run locally.
 
 In order to use `LlamaCPP`, follow the installation guide [here](/examples/llm/llama_2_llama_cpp.ipynb). You'll need to install the `llama-cpp-python` package, preferably compiled to support your GPU. This will use around 11.5GB of memory across the CPU and GPU.
 
-- 
GitLab