diff --git a/docs/presentations/materials/2024-02-28-rag-bootcamp-vector-institute.ipynb b/docs/presentations/materials/2024-02-28-rag-bootcamp-vector-institute.ipynb
index 4524113a3929886ec8e94103ee84d649f4decba4..073e5a818b24e48185a67314ae6551197c4e983a 100644
--- a/docs/presentations/materials/2024-02-28-rag-bootcamp-vector-institute.ipynb
+++ b/docs/presentations/materials/2024-02-28-rag-bootcamp-vector-institute.ipynb
@@ -121,7 +121,7 @@
       "\n",
       "3. Declaration of Research Assessment: In academia, this could refer to a statement or policy regarding how research is evaluated.\n",
       "\n",
-      "4. Digital on-Ramp's Assessment: In the field of digital technology, this could refer to an assessment tool used by the Digital On-Ramps program.\n",
+      "4. Digital On-Ramp's Assessment: In the field of digital technology, this could refer to an assessment tool used by the Digital On-Ramps program.\n",
       "\n",
       "Please provide more context for a more accurate definition.\n"
      ]
@@ -371,7 +371,7 @@
    "source": [
     "## In Summary\n",
     "\n",
-    "- LLMs as powerful as they are, don't perform too well with knowledge-intensive tasks (domain specific, updated data, long-tail)\n",
+    "- LLMs as powerful as they are, don't perform too well with knowledge-intensive tasks (domain-specific, updated data, long-tail)\n",
     "- Context augmentation has been shown (in a few studies) to outperform LLMs without augmentation\n",
     "- In this notebook, we showed one such example that follows that pattern."
    ]
diff --git a/docs/use_cases/chatbots.md b/docs/use_cases/chatbots.md
index f2b37b6320b312ea7aa5c04f57e6f52cc09b4498..727884c2b1bb41f7cd714e73583fe4500ef280a5 100644
--- a/docs/use_cases/chatbots.md
+++ b/docs/use_cases/chatbots.md
@@ -10,7 +10,7 @@ Here are some relevant resources:
 - [create-llama](https://blog.llamaindex.ai/create-llama-a-command-line-tool-to-generate-llamaindex-apps-8f7683021191), a command line tool that generates a full-stack chatbot application for you
 - [SECinsights.ai](https://www.secinsights.ai/), an open-source application that uses LlamaIndex to build a chatbot that answers questions about SEC filings
 - [RAGs](https://blog.llamaindex.ai/introducing-rags-your-personalized-chatgpt-experience-over-your-data-2b9d140769b1), a project inspired by OpenAI's GPTs that lets you build a low-code chatbot over your data using Streamlit
-- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chat bots in nature
+- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chatbots by nature
 
 ## External sources
 
diff --git a/docs/use_cases/multimodal.md b/docs/use_cases/multimodal.md
index 5aa7fba00a40061e090f431194349fd668a94c43..42c5e837439b6eb6cc2be4a6bfaf8fe74085dded 100644
--- a/docs/use_cases/multimodal.md
+++ b/docs/use_cases/multimodal.md
@@ -1,10 +1,10 @@
 # Multi-modal
 
-LlamaIndex offers capabilities to not only build language-based applications, but also **multi-modal** applications - combining language and images.
+LlamaIndex offers capabilities to build not only language-based applications but also **multi-modal** applications - combining language and images.
 
 ## Types of Multi-modal Use Cases
 
-This space is actively being explored right now, but there are some fascinating use cases popping up.
+This space is actively being explored right now, but some fascinating use cases are popping up.
 
 ### RAG (Retrieval Augmented Generation)
 
@@ -73,7 +73,7 @@ maxdepth: 1
 
 These sections show comparisons between different multi-modal models for different use cases.
 
-### LLaVa-13, Fuyu-8B and MiniGPT-4 Multi-Modal LLM Models Comparison for Image Reasoning
+### LLaVa-13B, Fuyu-8B, and MiniGPT-4 Multi-Modal LLM Models Comparison for Image Reasoning
 
 These notebooks show how to use different Multi-Modal LLM models for image understanding/reasoning. The various model inferences are supported by Replicate or OpenAI GPT4-V API. We compared several popular Multi-Modal LLMs:
 
@@ -97,7 +97,29 @@ GPT4-V: </examples/multi_modal/openai_multi_modal.ipynb>
 
 ### Simple Evaluation of Multi-Modal RAG
 
-In this notebook guide, we'll demonstrate how to evaluate a Multi-Modal RAG system. As in the text-only case, we will consider the evaluation of Retrievers and Generators separately. As we alluded in our blog on the topic of Evaluating Multi-Modal RAGs, our approach here involves the application of adapted versions of the usual techniques for evaluating both Retriever and Generator (used for the text-only case). These adapted versions are part of the llama-index library (i.e., evaluation module), and this notebook will walk you through how you can apply them to your evaluation use-cases.
+In this notebook guide, we'll demonstrate how to evaluate a Multi-Modal RAG system. As in the text-only case, we consider the evaluation of Retrievers and Generators separately. As we alluded to in our blog on Evaluating Multi-Modal RAGs, our approach applies adapted versions of the usual text-only evaluation techniques to both the Retriever and the Generator. These adapted versions are part of the llama-index library (i.e., the evaluation module), and this notebook will walk you through how you can apply them to your evaluation use cases.
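+
+As a minimal sketch of the generator-side evaluation (a hypothetical example assuming the `MultiModalRelevancyEvaluator` and `MultiModalFaithfulnessEvaluator` classes from the evaluation module, with GPT4-V as the judge; the notebook below has the full walkthrough):
+
+```python
+from llama_index.evaluation.multi_modal import (
+    MultiModalRelevancyEvaluator,
+    MultiModalFaithfulnessEvaluator,
+)
+from llama_index.multi_modal_llms.openai import OpenAIMultiModal
+
+# A multi-modal LLM acts as the judge for both evaluators (assumed API).
+judge = OpenAIMultiModal(model="gpt-4-vision-preview", max_new_tokens=300)
+
+relevancy_evaluator = MultiModalRelevancyEvaluator(multi_modal_llm=judge)
+faithfulness_evaluator = MultiModalFaithfulnessEvaluator(multi_modal_llm=judge)
+
+# `query` is your question string; `response` is the object returned by your
+# Multi-Modal RAG query engine, e.g. response = query_engine.query(query).
+relevancy_result = relevancy_evaluator.evaluate_response(query=query, response=response)
+faithfulness_result = faithfulness_evaluator.evaluate_response(query=query, response=response)
+print(relevancy_result.passing, faithfulness_result.passing)
+```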
 
 ```{toctree}
 ---