From 660cf70abcf8cb033e523cbfde1434fef2a840b6 Mon Sep 17 00:00:00 2001
From: Shorthills AI <141953346+ShorthillsAI@users.noreply.github.com>
Date: Tue, 13 Feb 2024 21:08:10 +0530
Subject: [PATCH] Fixed some grammatical mistakes in several doc files (#10627)

---
 docs/getting_started/discover_llamaindex.md   | 4 ++--
 docs/getting_started/installation.md          | 4 ++--
 docs/getting_started/reading.md               | 4 ++--
 docs/getting_started/starter_example.md       | 4 ++--
 docs/getting_started/starter_example_local.md | 2 +-
 docs/getting_started/v0_10_0_migration.md     | 6 +++---
 6 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/docs/getting_started/discover_llamaindex.md b/docs/getting_started/discover_llamaindex.md
index 8b75c8833a..4a1857ed7a 100644
--- a/docs/getting_started/discover_llamaindex.md
+++ b/docs/getting_started/discover_llamaindex.md
@@ -6,7 +6,7 @@ If you like learning from videos, now's a good time to check out our "Discover L
 
 This is a sub-series within Discover LlamaIndex that shows you how to build a document chatbot from scratch.
 
-We show you how to do this in a "bottoms-up" fashion - start by using the LLMs, data objects as independent modules. Then gradually add higher-level abstractions like indexing, and advanced retrievers/rerankers.
+We show you how to do this in a "bottoms-up" fashion - start by using the LLMs and data objects as independent modules. Then gradually add higher-level abstractions like indexing and advanced retrievers/rerankers.
 
 [Full Repo](https://github.com/run-llama/llama_docs_bot)
 [[Part 1] LLMs and Prompts](https://www.youtube.com/watch?v=p0jcvGiBKSA)
@@ -25,7 +25,7 @@ This video covers the `SubQuestionQueryEngine` and how it can be applied to fina
 
 ## Discord Document Management
 
-This video covers managing documents from a source that is constantly updating (i.e Discord) and how you can avoid document duplication and save embedding tokens.
+This video covers managing documents from a source that is constantly updating (i.e. Discord) and how you can avoid document duplication and save embedding tokens.
 
 [Youtube](https://www.youtube.com/watch?v=j6dJcODLd_c)
 
diff --git a/docs/getting_started/installation.md b/docs/getting_started/installation.md
index b5d0f22b6d..fcef93dcb4 100644
--- a/docs/getting_started/installation.md
+++ b/docs/getting_started/installation.md
@@ -52,7 +52,7 @@ pip install llama-index-core llama-index-readers-file llama-index-llms-ollama ll
 
 [Check out our Starter Example with Local Models](starter_example_local.md)
 
-A full guide to using and configuring LLMs available [here](/module_guides/models/llms.md).
+A full guide to using and configuring LLMs is available [here](/module_guides/models/llms.md).
 
 A full guide to using and configuring embedding models is available [here](/module_guides/models/embeddings.md).
 
@@ -63,7 +63,7 @@ Git clone this repository: `git clone https://github.com/jerryjliu/llama_index.g
 - [Install poetry](https://python-poetry.org/docs/#installation) - this will help you manage package dependencies
 - `poetry shell` - this command creates a virtual environment, which keeps installed packages contained to this project
 - `poetry install` - this will install the core starter package requirements
-- (Optional) `poetry install --with dev,docs` - this will install all dependencies needed for most local development
+- (Optional) `poetry install --with dev,docs` - this will install all dependencies needed for most local development
 
 From there, you can install integrations as needed with `pip`, For example:
 
diff --git a/docs/getting_started/reading.md b/docs/getting_started/reading.md
index cdba1c1425..10187869bb 100644
--- a/docs/getting_started/reading.md
+++ b/docs/getting_started/reading.md
@@ -14,7 +14,7 @@ Our docs are structured so you should be able to roughly progress simply by movi
 
 1. **Getting started**
 
-   The section you're in right now. We can get you going from knowing nothing about LlamaIndex and LLMs. [Install the library](installation.md), write your first demo in [five lines of code](starter_example.md), learn more about the [high level concepts](concepts.md) of LLM applications and then see how you can [customize the five-line example](customization.rst) to meet your needs.
+   The section you're in right now. We can get you going from knowing nothing about LlamaIndex and LLMs. [Install the library](installation.md), write your first demo in [five lines of code](starter_example.md), learn more about the [high level concepts](concepts.md) of LLM applications, and then see how you can [customize the five-line example](customization.rst) to meet your needs.
 
 2. **Use cases**
 
@@ -22,7 +22,7 @@ Our docs are structured so you should be able to roughly progress simply by movi
 
 3. **Understanding LlamaIndex**
 
-   Once you've completed the Getting Started section, this is the next place to go. In a series of bite-sized tutorials we'll walk you through every stage of building a production LlamaIndex application and help you level up on the concepts of the library and LLMs in general as you go.
+   Once you've completed the Getting Started section, this is the next place to go. In a series of bite-sized tutorials, we'll walk you through every stage of building a production LlamaIndex application and help you level up on the concepts of the library and LLMs in general as you go.
 
 4. **Optimizing**
 
diff --git a/docs/getting_started/starter_example.md b/docs/getting_started/starter_example.md
index 6e029c94a0..8b8e1b460d 100644
--- a/docs/getting_started/starter_example.md
+++ b/docs/getting_started/starter_example.md
@@ -24,7 +24,7 @@ LlamaIndex uses OpenAI's `gpt-3.5-turbo` by default. Make sure your API key is a
 export OPENAI_API_KEY=XXXXX
 ```
 
-and on windows it is
+and on Windows it is
 
 ```
 set OPENAI_API_KEY=XXXXX
@@ -111,7 +111,7 @@ else:
     storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
     index = load_index_from_storage(storage_context)
 
-# either way we can now query the index
+# Either way, we can now query the index
 query_engine = index.as_query_engine()
 response = query_engine.query("What did the author do growing up?")
 print(response)
diff --git a/docs/getting_started/starter_example_local.md b/docs/getting_started/starter_example_local.md
index e0dfc9cc76..4785a2ac1f 100644
--- a/docs/getting_started/starter_example_local.md
+++ b/docs/getting_started/starter_example_local.md
@@ -14,7 +14,7 @@ The easiest way to get it is to [download it via this link](https://raw.githubus
 
 ## Setup
 
-Ollama is a tool to help you get setup with LLMs locally (currently supported on OSX and Linux. You can install Ollama on Windows through WSL 2).
+Ollama is a tool to help you get set up with LLMs locally (currently supported on OSX and Linux. You can install Ollama on Windows through WSL 2).
 
 Follow the [README](https://github.com/jmorganca/ollama) to learn how to install it.
 
diff --git a/docs/getting_started/v0_10_0_migration.md b/docs/getting_started/v0_10_0_migration.md
index 16198c3bc8..b25cfb2902 100644
--- a/docs/getting_started/v0_10_0_migration.md
+++ b/docs/getting_started/v0_10_0_migration.md
@@ -2,7 +2,7 @@
 
 With the introduction of LlamaIndex v0.10.0, there were several changes
 
-- integrations have seperate `pip install`s (See the [full registry](https://pretty-sodium-5e0.notion.site/ce81b247649a44e4b6b35dfb24af28a6?v=53b3c2ced7bb4c9996b81b83c9f01139))
+- integrations have separate `pip install`s (See the [full registry](https://pretty-sodium-5e0.notion.site/ce81b247649a44e4b6b35dfb24af28a6?v=53b3c2ced7bb4c9996b81b83c9f01139))
 - many imports changed
 - the service context was deprecated
 
@@ -46,13 +46,13 @@ llamaindex-cli upgrade-file <file_path>
 llamaindex-cli upgrade <folder_path>
 ```
 
-For notebooks, new `pip install` statements are inserting and imports are updated.
+For notebooks, new `pip install` statements are inserted and imports are updated.
 
 For `.py` and `.md` files, import statements are also updated, and new requirements are printed to the terminal.
 
 ## Deprecated ServiceContext
 
-In addition to import changes, the existing `ServiceContext` has been deprecated. While it will be supported for a limited time, the preffered way of setting up the same options will be either globally in the `Settings` object or locally in the APIs that use certain modules.
+In addition to import changes, the existing `ServiceContext` has been deprecated. While it will be supported for a limited time, the preferred way of setting up the same options will be either globally in the `Settings` object or locally in the APIs that use certain modules.
 
 For example, before you might have had:
 
-- 
GitLab