From 53e9f5233bf45f752ffe6bd9cd5cb011b5137970 Mon Sep 17 00:00:00 2001
From: Shorthills AI <141953346+ShorthillsAI@users.noreply.github.com>
Date: Fri, 1 Mar 2024 12:54:43 +0530
Subject: [PATCH] Fixed some minor grammatical issues (#11530)

* Fixed some grammatical mistakes (#82)

* Update discover_llamaindex.md

* Update installation.md

* Update reading.md

* Update starter_example.md

* Update starter_example_local.md

* Update v0_10_0_migration.md

* Update 2024-02-28-rag-bootcamp-vector-institute.ipynb

* Update multimodal.md

* Update chatbots.md
---
 .../2024-02-28-rag-bootcamp-vector-institute.ipynb        | 4 ++--
 docs/use_cases/chatbots.md                                | 2 +-
 docs/use_cases/multimodal.md                              | 8 ++++----
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/presentations/materials/2024-02-28-rag-bootcamp-vector-institute.ipynb b/docs/presentations/materials/2024-02-28-rag-bootcamp-vector-institute.ipynb
index 4524113a39..073e5a818b 100644
--- a/docs/presentations/materials/2024-02-28-rag-bootcamp-vector-institute.ipynb
+++ b/docs/presentations/materials/2024-02-28-rag-bootcamp-vector-institute.ipynb
@@ -121,7 +121,7 @@
       "\n",
       "3. Declaration of Research Assessment: In academia, this could refer to a statement or policy regarding how research is evaluated.\n",
       "\n",
-      "4. Digital on-Ramp's Assessment: In the field of digital technology, this could refer to an assessment tool used by the Digital On-Ramps program.\n",
+      "4. Digital On-Ramp's Assessment: In the field of digital technology, this could refer to an assessment tool used by the Digital On-Ramps program.\n",
       "\n",
       "Please provide more context for a more accurate definition.\n"
      ]
@@ -371,7 +371,7 @@
    "source": [
     "## In Summary\n",
     "\n",
-    "- LLMs as powerful as they are, don't perform too well with knowledge-intensive tasks (domain specific, updated data, long-tail)\n",
+    "- LLMs as powerful as they are, don't perform too well with knowledge-intensive tasks (domain-specific, updated data, long-tail)\n",
     "- Context augmentation has been shown (in a few studies) to outperform LLMs without augmentation\n",
     "- In this notebook, we showed one such example that follows that pattern."
    ]
diff --git a/docs/use_cases/chatbots.md b/docs/use_cases/chatbots.md
index f2b37b6320..727884c2b1 100644
--- a/docs/use_cases/chatbots.md
+++ b/docs/use_cases/chatbots.md
@@ -10,7 +10,7 @@ Here are some relevant resources:
 - [create-llama](https://blog.llamaindex.ai/create-llama-a-command-line-tool-to-generate-llamaindex-apps-8f7683021191), a command line tool that generates a full-stack chatbot application for you
 - [SECinsights.ai](https://www.secinsights.ai/), an open-source application that uses LlamaIndex to build a chatbot that answers questions about SEC filings
 - [RAGs](https://blog.llamaindex.ai/introducing-rags-your-personalized-chatgpt-experience-over-your-data-2b9d140769b1), a project inspired by OpenAI's GPTs that lets you build a low-code chatbot over your data using Streamlit
-- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chat bots in nature
+- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chatbots by nature
 
 ## External sources
 
diff --git a/docs/use_cases/multimodal.md b/docs/use_cases/multimodal.md
index 5aa7fba00a..42c5e83743 100644
--- a/docs/use_cases/multimodal.md
+++ b/docs/use_cases/multimodal.md
@@ -1,10 +1,10 @@
 # Multi-modal
 
-LlamaIndex offers capabilities to not only build language-based applications, but also **multi-modal** applications - combining language and images.
+LlamaIndex offers capabilities to build not only language-based applications but also **multi-modal** applications - combining language and images.
 
 ## Types of Multi-modal Use Cases
 
-This space is actively being explored right now, but there are some fascinating use cases popping up.
+This space is actively being explored right now, but some fascinating use cases are popping up.
 
 ### RAG (Retrieval Augmented Generation)
 
@@ -73,7 +73,7 @@ maxdepth: 1
 
 These sections show comparisons between different multi-modal models for different use cases.
 
-### LLaVa-13, Fuyu-8B and MiniGPT-4 Multi-Modal LLM Models Comparison for Image Reasoning
+### LLaVa-13B, Fuyu-8B, and MiniGPT-4 Multi-Modal LLM Models Comparison for Image Reasoning
 
 These notebooks show how to use different Multi-Modal LLM models for image understanding/reasoning. The various model inferences are supported by Replicate or OpenAI GPT4-V API. We compared several popular Multi-Modal LLMs:
 
@@ -97,7 +97,7 @@ GPT4-V: </examples/multi_modal/openai_multi_modal.ipynb>
 
 ### Simple Evaluation of Multi-Modal RAG
 
-In this notebook guide, we'll demonstrate how to evaluate a Multi-Modal RAG system. As in the text-only case, we will consider the evaluation of Retrievers and Generators separately. As we alluded in our blog on the topic of Evaluating Multi-Modal RAGs, our approach here involves the application of adapted versions of the usual techniques for evaluating both Retriever and Generator (used for the text-only case). These adapted versions are part of the llama-index library (i.e., evaluation module), and this notebook will walk you through how you can apply them to your evaluation use-cases.
+In this notebook guide, we'll demonstrate how to evaluate a Multi-Modal RAG system. As in the text-only case, we will consider the evaluation of Retrievers and Generators separately. As we alluded to in our blog on the topic of Evaluating Multi-Modal RAGs, our approach here involves the application of adapted versions of the usual techniques for evaluating both Retriever and Generator (used for the text-only case). These adapted versions are part of the llama-index library (i.e., evaluation module), and this notebook will walk you through how you can apply them to your evaluation use cases.
 
 ```{toctree}
 ---
-- 
GitLab