diff --git a/docs/use_cases/q_and_a/rag_cli.md b/docs/use_cases/q_and_a/rag_cli.md
index 9a0a0e8404ca425733c3355a0559b4bf191220e5..e7496517fb240c0dfc373d84f0c2f4e5ef81e8e8 100644
--- a/docs/use_cases/q_and_a/rag_cli.md
+++ b/docs/use_cases/q_and_a/rag_cli.md
@@ -2,7 +2,7 @@
 
 One common use case is chatting with an LLM about files you have saved locally on your computer.
 
-We have written a CLI tool do help you do just that! You can point the rag CLI tool to a set of files you've saved locally, and it will ingest those files into a local vector database that is then used for a Chat Q&A repl within your terminal.
+We have written a CLI tool to help you do just that! You can point the rag CLI tool to a set of files you've saved locally, and it will ingest those files into a local vector database that is then used for a Chat Q&A REPL within your terminal.
 
 By default, this tool uses OpenAI for the embeddings & LLM as well as a local Chroma Vector DB instance. **Warning**: this means that, by default, the local data you ingest with this tool _will_ be sent to OpenAI's API.