diff --git a/docs/getting_started/concepts.md b/docs/getting_started/concepts.md
index 1c9fd8d5bb5d6836822800a6c8262622ddc1d09d..efbf158af7dee20cd87b9c2319a204e96dfb042c 100644
--- a/docs/getting_started/concepts.md
+++ b/docs/getting_started/concepts.md
@@ -40,7 +40,7 @@ Once you've ingested your data, LlamaIndex will help you index the data into a f
 ### Querying Stage
 
 In the querying stage, the RAG pipeline retrieves the most relevant context given a user query,
-and pass that to the LLM (along with the query) to synthesize a response.
+and passes that to the LLM (along with the query) to synthesize a response.
-This gives the LLM up-to-date knowledge that is not in its original training data,
-(also reducing hallucination).
+This gives the LLM up-to-date knowledge that is not in its original training data
+(and also reduces hallucination).
-The key challenge in the querying stage is retrieval, orchestration, and reasoning over (potentially many) knowledge bases.
+The key challenges in the querying stage are retrieval, orchestration, and reasoning over (potentially many) knowledge bases.