diff --git a/CHANGELOG.md b/CHANGELOG.md
index 224fee307b7ba47aa228e0e33828d0cd56558834..fd9b09fd15ad3782718473eedc901a7d337aee64 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -109,7 +109,7 @@
 ### Breaking Changes / Deprecations
 
 - Added `LocalAI` demo and began deprecation cycle (#9151)
-- Deprecate `QueryResponseDataset` and `DatasetGenerator` of `evaluaton` module (#9165)
+- Deprecate `QueryResponseDataset` and `DatasetGenerator` of `evaluation` module (#9165)
 
 ### Bug Fixes / Nits
 
diff --git a/docs/changes/deprecated_terms.md b/docs/changes/deprecated_terms.md
index def9fc8ee0dbdd3bb7ce9832e922b78863bd5810..843da0e7928194f210317df29ae625b0ce8ab4b8 100644
--- a/docs/changes/deprecated_terms.md
+++ b/docs/changes/deprecated_terms.md
@@ -24,7 +24,7 @@ This has been renamed to `VectorStoreIndex`, but it is only a cosmetic change. P
 
 ## LLMPredictor
 
-The `LLMPredictor` object is no longer intended to be used by users. Instead, you can setup an LLM directly and pass it into the `ServiceContext`. THe `LLM` class itself has similar attributes and methods as the `LLMPredictor`.
+The `LLMPredictor` object is no longer intended to be used by users. Instead, you can set up an LLM directly and pass it into the `ServiceContext`. The `LLM` class itself has attributes and methods similar to those of the `LLMPredictor`.
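+
+For example, a minimal setup might look like the following (the model name is just an illustration):
+
+```python
+from llama_index import ServiceContext
+from llama_index.llms import OpenAI
+
+# Pass the LLM itself into the ServiceContext -- no LLMPredictor wrapper needed
+llm = OpenAI(model="gpt-3.5-turbo", temperature=0.1)
+service_context = ServiceContext.from_defaults(llm=llm)
+```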
 
 - [LLMs in LlamaIndex](/module_guides/models/llms.md)
 - [Setting LLMs in the ServiceContext](/module_guides/supporting_modules/service_context.md)
diff --git a/docs/community/integrations/graph_stores.md b/docs/community/integrations/graph_stores.md
index 9587d18ae8acb48c4713cd3148ef022248288bf4..f9b49dace838c9b01fc03e5e3ba6474162a9566d 100644
--- a/docs/community/integrations/graph_stores.md
+++ b/docs/community/integrations/graph_stores.md
@@ -2,7 +2,7 @@
 
 ## `Neo4jGraphStore`
 
-`Neo4j` is supported as a graph store integration. You can persist, visualze, and query graphs using LlamaIndex and Neo4j. Furthermore, existing Neo4j graphs are directly supported using `text2cypher` and the `KnowledgeGraphQueryEngine`.
+`Neo4j` is supported as a graph store integration. You can persist, visualize, and query graphs using LlamaIndex and Neo4j. Furthermore, existing Neo4j graphs are directly supported using `text2cypher` and the `KnowledgeGraphQueryEngine`.
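+
+As a rough sketch of connecting one (the connection details below are placeholders for your own instance):
+
+```python
+from llama_index import StorageContext
+from llama_index.graph_stores import Neo4jGraphStore
+
+# Placeholder credentials -- point these at your own Neo4j instance
+graph_store = Neo4jGraphStore(
+    username="neo4j",
+    password="password",
+    url="bolt://localhost:7687",
+    database="neo4j",
+)
+storage_context = StorageContext.from_defaults(graph_store=graph_store)
+```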
 
 If you've never used Neo4j before, you can download the desktop client [here](https://neo4j.com/download/).
 
diff --git a/docs/community/integrations/guidance.md b/docs/community/integrations/guidance.md
index 130173d7adebde4331025cdb51c6ea560fa8c2f5..2d10cc5a8463df91eeb2bde3c1321ea3deee4e1c 100644
--- a/docs/community/integrations/guidance.md
+++ b/docs/community/integrations/guidance.md
@@ -33,7 +33,7 @@ and supplying a suitable prompt template.
 
-> Note: guidance uses handlebars-style templates, which uses double braces for variable substitution, and single braces for literal braces. This is the opposite convention of Python format strings.
+> Note: guidance uses handlebars-style templates, which use double braces for variable substitution and single braces for literal braces. This is the opposite of the convention used by Python format strings.
 
-> Note: We provide an utility function `from llama_index.prompts.guidance_utils import convert_to_handlebars` that can convert from the Python format string style template to guidance handlebars-style template.
+> Note: We provide a utility function `from llama_index.prompts.guidance_utils import convert_to_handlebars` that can convert a Python format-string template into a guidance handlebars-style template.
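+
+For example, a quick sketch of the conversion we'd expect:
+
+```python
+from llama_index.prompts.guidance_utils import convert_to_handlebars
+
+# Python-style "{topic}" should become handlebars-style "{{topic}}"
+template = convert_to_handlebars("Write a poem about {topic}.")
+print(template)  # expected: Write a poem about {{topic}}.
+```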
 
 ```python
 program = GuidancePydanticProgram(
@@ -70,7 +70,7 @@ You can play with [this notebook](/examples/output_parsing/guidance_pydantic_pro
 ### Using guidance to improve the robustness of our sub-question query engine.
 
 LlamaIndex provides a toolkit of advanced query engines for tackling different use-cases.
-Several relies on structured output in intermediate steps.
+Several rely on structured output in intermediate steps.
-We can use guidance to improve the robustness of these query engines, by making sure the
-intermediate response has the expected structure (so that they can be parsed correctly to a structured object).
+We can use guidance to improve the robustness of these query engines by making sure the
+intermediate response has the expected structure (so that it can be parsed correctly into a structured object).
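+
+As a sketch (assuming `query_engine_tools` has already been defined, and using guidance's older `guidance.llms` API from this era):
+
+```python
+from guidance.llms import OpenAI as GuidanceOpenAI
+from llama_index.query_engine import SubQuestionQueryEngine
+from llama_index.question_gen.guidance_generator import GuidanceQuestionGenerator
+
+# Use guidance to constrain the structure of the intermediate sub-questions
+question_gen = GuidanceQuestionGenerator.from_defaults(
+    guidance_llm=GuidanceOpenAI("text-davinci-003"), verbose=False
+)
+query_engine = SubQuestionQueryEngine.from_defaults(
+    question_gen=question_gen, query_engine_tools=query_engine_tools
+)
+```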
 
diff --git a/docs/community/integrations/vector_stores.md b/docs/community/integrations/vector_stores.md
index 38fc627cef6119662b2a8b9195517a866ebe0dd0..05c24f091193ea967cfbfe20d69f8c993f2428f2 100644
--- a/docs/community/integrations/vector_stores.md
+++ b/docs/community/integrations/vector_stores.md
@@ -45,7 +45,7 @@ Once constructed, the index can be used for querying.
 
 **Default Vector Store Index Construction/Querying**
 
-By default, `VectorStoreIndex` uses a in-memory `SimpleVectorStore`
+By default, `VectorStoreIndex` uses an in-memory `SimpleVectorStore`
 that's initialized as part of the default storage context.
 
 ```python
diff --git a/docs/getting_started/concepts.md b/docs/getting_started/concepts.md
index 3dc1097a541568e6b36475e3dea7010804273cb8..60bf00538775d66ea31d681d5cab7be2693a7841 100644
--- a/docs/getting_started/concepts.md
+++ b/docs/getting_started/concepts.md
@@ -38,7 +38,7 @@ There are also some terms you'll encounter that refer to steps within each of th
 
 ### Loading stage
 
-[**Nodes and Documents**](/module_guides/loading/documents_and_nodes/root.md): A `Document` is a container around any data source - for instance, a PDF, an API output, or retrieved data from a database. A `Node` is the atomic unit of data in LlamaIndex and represents a "chunk" of a source `Document`. Nodes have metadata that relate them to the document they are in and to other nodes.
+[**Nodes and Documents**](/module_guides/loading/documents_and_nodes/root.md): A `Document` is a container around any data source - for instance, a PDF, an API output, or retrieved data from a database. A `Node` is the atomic unit of data in LlamaIndex and represents a "chunk" of a source `Document`. Nodes have metadata that relate them to the document they are in and to other nodes.
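+
+As a minimal sketch of turning a `Document` into `Node`s:
+
+```python
+from llama_index import Document
+from llama_index.node_parser import SimpleNodeParser
+
+doc = Document(text="Some source text to ingest...")
+parser = SimpleNodeParser.from_defaults()
+nodes = parser.get_nodes_from_documents([doc])  # each node is a chunk of the doc
+```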
 
 [**Connectors**](/module_guides/loading/connector/root.md):
 A data connector (often called a `Reader`) ingests data from different data sources and data formats into `Document`s and `Nodes`.
@@ -48,7 +48,7 @@ A data connector (often called a `Reader`) ingests data from different data sour
 [**Indexes**](/module_guides/indexing/indexing.md):
 Once you've ingested your data, LlamaIndex will help you index the data into a structure that's easy to retrieve. This usually involves generating `vector embeddings` which are stored in a specialized database called a `vector store`. Indexes can also store a variety of metadata about your data.
 
-[**Embeddings**](/module_guides/models/embeddings.md) LLMs generate numerical representations of data called `embeddings`. When filtering your data for relevance, LlamaIndex will convert queries into embeddings, and your vector store will find data which is numerically similar to the embedding of your query.
+[**Embeddings**](/module_guides/models/embeddings.md): LLMs generate numerical representations of data called `embeddings`. When filtering your data for relevance, LlamaIndex will convert queries into embeddings, and your vector store will find data that is numerically similar to the embedding of your query.
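+
+Purely for intuition, "numerically similar" usually means something like cosine similarity between the query embedding and each chunk embedding:
+
+```python
+def cosine_similarity(a, b):
+    """Toy similarity measure between two embedding vectors."""
+    dot = sum(x * y for x, y in zip(a, b))
+    norm_a = sum(x * x for x in a) ** 0.5
+    norm_b = sum(x * x for x in b) ** 0.5
+    return dot / (norm_a * norm_b)
+```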
 
 ### Querying Stage
 
@@ -56,7 +56,7 @@ Once you've ingested your data, LlamaIndex will help you index the data into a s
 A retriever defines how to efficiently retrieve relevant context from an index when given a query. Your retrieval strategy is key to the relevancy of the data retrieved and the efficiency with which it's done.
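+
+For example (assuming an `index` has already been built over your data):
+
+```python
+retriever = index.as_retriever(similarity_top_k=5)
+nodes = retriever.retrieve("What did the author work on?")  # top-5 most relevant chunks
+```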
 
 [**Routers**](/module_guides/querying/router/root.md):
-A router determines which retriever will be used to retrieve relevant context from the knowledge base. More specifically, the `RouterRetriever` class, are responsible for selecting one or multiple candidate retrievers to execute a query. They use a selector to choose the best option based on each candidate's metadata and the query.
+A router determines which retriever will be used to retrieve relevant context from the knowledge base. More specifically, the `RouterRetriever` class is responsible for selecting one or multiple candidate retrievers to execute a query. It uses a selector to choose the best option based on each candidate's metadata and the query.
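+
+A sketch, assuming `vector_retriever` and `keyword_retriever` have already been built:
+
+```python
+from llama_index.retrievers import RouterRetriever
+from llama_index.selectors.pydantic_selectors import PydanticSingleSelector
+from llama_index.tools import RetrieverTool
+
+# The selector picks the best candidate retriever based on each tool's description
+retriever = RouterRetriever(
+    selector=PydanticSingleSelector.from_defaults(),
+    retriever_tools=[
+        RetrieverTool.from_defaults(
+            retriever=vector_retriever, description="Useful for semantic questions."
+        ),
+        RetrieverTool.from_defaults(
+            retriever=keyword_retriever, description="Useful for keyword lookups."
+        ),
+    ],
+)
+```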
 
 [**Node Postprocessors**](/module_guides/querying/node_postprocessors/root.md):
 A node postprocessor takes in a set of retrieved nodes and applies transformations, filtering, or re-ranking logic to them.
@@ -69,13 +69,13 @@ A response synthesizer generates a response from an LLM, using a user query and
 There are endless use cases for data-backed LLM applications but they can be roughly grouped into three categories:
 
 [**Query Engines**](/module_guides/deploying/query_engine/root.md):
-A query engine is an end-to-end pipeline that allow you to ask question over your data. It takes in a natural language query, and returns a response, along with reference context retrieved and passed to the LLM.
+A query engine is an end-to-end pipeline that allows you to ask questions over your data. It takes in a natural language query and returns a response, along with the reference context retrieved and passed to the LLM.
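+
+For example (assuming an `index` has already been built):
+
+```python
+query_engine = index.as_query_engine()
+response = query_engine.query("What did the author do growing up?")
+print(response)
+```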
 
 [**Chat Engines**](/module_guides/deploying/chat_engines/root.md):
-A chat engine is an end-to-end pipeline for having a conversation with your data (multiple back-and-forth instead of a single question & answer).
+A chat engine is an end-to-end pipeline for having a conversation with your data (multiple back-and-forth exchanges instead of a single question and answer).
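+
+For example, state is kept across turns (again assuming an existing `index`):
+
+```python
+chat_engine = index.as_chat_engine()
+print(chat_engine.chat("What is this document about?"))
+print(chat_engine.chat("Can you expand on that?"))  # remembers the prior turn
+```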
 
 [**Agents**](/module_guides/deploying/agents/root.md):
-An agent is an automated decision maker powered by an LLM that interacts with the world via a set of [tools](/module_guides/deploying/agents/tools/llamahub_tools_guide.md). Agents can take an arbitrary number of steps to complete a given task, dynamically deciding on the best course of action rather than following pre-determined steps. This gives it additional flexibility to tackle more complex tasks.
+An agent is an automated decision-maker powered by an LLM that interacts with the world via a set of [tools](/module_guides/deploying/agents/tools/llamahub_tools_guide.md). Agents can take an arbitrary number of steps to complete a given task, dynamically deciding on the best course of action rather than following pre-determined steps. This gives them additional flexibility to tackle more complex tasks.
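+
+A sketch of wrapping a query engine as a tool for an agent (assuming an existing `query_engine`; the tool name and description are illustrative):
+
+```python
+from llama_index.agent import OpenAIAgent
+from llama_index.tools import QueryEngineTool
+
+tool = QueryEngineTool.from_defaults(
+    query_engine=query_engine,
+    name="docs",
+    description="Answers questions over the ingested documents.",
+)
+agent = OpenAIAgent.from_tools([tool], verbose=True)
+response = agent.chat("Summarize the documents, then suggest follow-up questions.")
+```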
 
 ```{admonition} Next Steps
 * Tell me how to [customize things](/getting_started/customization.rst)
diff --git a/docs/getting_started/discover_llamaindex.md b/docs/getting_started/discover_llamaindex.md
index ee16aa02edf17d8cc24f92c50ed7a886e86120e1..8b75c8833a90bdce939338b54509af0fcce5b163 100644
--- a/docs/getting_started/discover_llamaindex.md
+++ b/docs/getting_started/discover_llamaindex.md
@@ -25,11 +25,11 @@ This video covers the `SubQuestionQueryEngine` and how it can be applied to fina
 
 ## Discord Document Management
 
-This video covers managing documents from a source that is consantly updating (i.e Discord) and how you can avoid document duplication and save embedding tokens.
+This video covers managing documents from a source that is constantly updating (i.e., Discord) and how you can avoid document duplication and save embedding tokens.
 
 [Youtube](https://www.youtube.com/watch?v=j6dJcODLd_c)
 
-[Notebook + Supplementary Material](https://github.com/jerryjliu/llama_index/tree/main/docs/examples/discover_llamaindex/document_management/)
+[Notebook and Supplementary Material](https://github.com/jerryjliu/llama_index/tree/main/docs/examples/discover_llamaindex/document_management/)
 
 [Reference Docs](/module_guides/indexing/document_management.md)
 
diff --git a/docs/module_guides/models/llms.md b/docs/module_guides/models/llms.md
index 40645d63e93c8fe7461a4fe2a1b44d3b85d87c31..cce3d549dc7520ca706cb845fa1190a890665c66 100644
--- a/docs/module_guides/models/llms.md
+++ b/docs/module_guides/models/llms.md
@@ -69,7 +69,7 @@ The tables below attempt to validate the **initial** experience with various Lla
 
 Generally, paid APIs such as OpenAI or Anthropic are viewed as more reliable. However, local open-source models have been gaining popularity due to their customizability and approach to transparency.
 
-**Contributing:** Anyone is welcome to contribute new LLMs to the documentation. Simply copy an existing notebook, setup and test your LLM, and open a PR with your resutls.
+**Contributing:** Anyone is welcome to contribute new LLMs to the documentation. Simply copy an existing notebook, set up and test your LLM, and open a PR with your results.
 
 If you have ways to improve the setup for existing notebooks, contributions to change this are welcome!
 
diff --git a/docs/optimizing/advanced_retrieval/query_transformations.md b/docs/optimizing/advanced_retrieval/query_transformations.md
index d10af98c3859dfa19057ea05cc288cae96a33de3..6ab0750d6ac7245780f72f3bf4f6a3068aaa829b 100644
--- a/docs/optimizing/advanced_retrieval/query_transformations.md
+++ b/docs/optimizing/advanced_retrieval/query_transformations.md
@@ -101,7 +101,7 @@ Check out our [example notebook](https://github.com/jerryjliu/llama_index/blob/m
 Multi-step query transformations are a generalization on top of existing single-step query transformation approaches.
 
 Given an initial, complex query, the query is transformed and executed against an index. The response is retrieved from the query.
-Given the response (along with prior responses) and the query, followup questions may be asked against the index as well. This technique allows a query to be run against a single knowledge source until that query has satisfied all questions.
+Given the response (along with prior responses) and the query, follow-up questions may be asked against the index as well. This technique allows a query to be run against a single knowledge source until all of the follow-up questions have been answered.
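+
+A sketch of wiring this up (assuming an existing `index`; the multi-step engine keeps decomposing and re-querying until it decides the question is answered):
+
+```python
+from llama_index.indices.query.query_transform.base import StepDecomposeQueryTransform
+from llama_index.query_engine.multistep_query_engine import MultiStepQueryEngine
+
+step_decompose_transform = StepDecomposeQueryTransform(verbose=True)
+query_engine = MultiStepQueryEngine(
+    query_engine=index.as_query_engine(),
+    query_transform=step_decompose_transform,
+)
+response = query_engine.query("Who was in the author's first accelerator batch?")
+```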
 
 An example image is shown below.
 
diff --git a/docs/optimizing/basic_strategies/basic_strategies.md b/docs/optimizing/basic_strategies/basic_strategies.md
index 3db6d39314f20608e2430337050a38a7ceb5ed59..ee83d7c38af5dfba68765dc6349700fc998e0796 100644
--- a/docs/optimizing/basic_strategies/basic_strategies.md
+++ b/docs/optimizing/basic_strategies/basic_strategies.md
@@ -49,7 +49,7 @@ We have a list of [all supported embedding model integrations](/module_guides/mo
 
 Depending on the type of data you are indexing, or the results from your retrieval, you may want to customize the chunk size or chunk overlap.
 
-When documents are ingested into an index, the are split into chunks with a certain amount of overlap. The default chunk size is 1024, while the default chunk overlap is 20.
+When documents are ingested into an index, they are split into chunks with a certain amount of overlap. The default chunk size is 1024 tokens, while the default chunk overlap is 20 tokens.
 
 Changing either of these parameters will change the embeddings that are calculated. A smaller chunk size means the embeddings are more precise, while a larger chunk size means that the embeddings may be more general, but can miss fine-grained details.
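+
+For example, to halve the default chunk size and raise the overlap (assuming `documents` has already been loaded):
+
+```python
+from llama_index import ServiceContext, VectorStoreIndex
+
+service_context = ServiceContext.from_defaults(chunk_size=512, chunk_overlap=50)
+index = VectorStoreIndex.from_documents(documents, service_context=service_context)
+```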
 
diff --git a/docs/optimizing/building_rag_from_scratch.md b/docs/optimizing/building_rag_from_scratch.md
index 639d0b6aaa50941cfca9b6ed6003cc4098cfd1ff..dfb9a7ca266614bcc1c68f2e9e68d3cffe33e0a8 100644
--- a/docs/optimizing/building_rag_from_scratch.md
+++ b/docs/optimizing/building_rag_from_scratch.md
@@ -25,7 +25,7 @@ maxdepth: 1
 
 ## Building Vector Retrieval from Scratch
 
-This tutorial shows you how to build a retriever to query an vector store.
+This tutorial shows you how to build a retriever to query a vector store.
 
 ```{toctree}
 ---
diff --git a/docs/use_cases/agents.md b/docs/use_cases/agents.md
index 9d2ffb86d07859cc38fa7e35d3272b5fc44bfed0..5b7664afe45d1808798539d17996dc91b72d57e7 100644
--- a/docs/use_cases/agents.md
+++ b/docs/use_cases/agents.md
@@ -21,7 +21,7 @@ In general, LlamaIndex components offer more explicit, constrained behavior for
 capable of general reasoning.
 
 There are tradeoffs for using both - less-capable LLMs typically do better with more constraints. Take a look at [our blog post on this](https://medium.com/llamaindex-blog/dumber-llm-agents-need-more-constraints-and-better-tools-17a524c59e12) for
-a more information + a detailed analysis.
+more information and a detailed analysis.
 
 ## Learn more
 
diff --git a/examples/gatsby/gatsby_license.txt b/examples/gatsby/gatsby_license.txt
index c5ded33b9fae8f1cde7da4b115548bc92ef5df33..f0a6f5055b8cdfe416aa2103399506a8741accdf 100644
--- a/examples/gatsby/gatsby_license.txt
+++ b/examples/gatsby/gatsby_license.txt
@@ -38,13 +38,13 @@ Section 1. General Terms of Use and Redistributing Project
 Gutenberg-tm electronic works
 
 1.A. By reading or using any part of this Project Gutenberg-tm
-electronic work, you indicate that you have read, understand, agree to
-and accept all the terms of this license and intellectual property
+electronic work, you indicate that you have read, understood, agreed
+to, and accepted all the terms of this license and intellectual property
 (trademark/copyright) agreement. If you do not agree to abide by all
 the terms of this agreement, you must cease using and return or
 destroy all copies of Project Gutenberg-tm electronic works in your
 possession. If you paid a fee for obtaining a copy of or access to a
-Project Gutenberg-tm electronic work and you do not agree to be bound
+Project Gutenberg-tm electronic work and you do not agree to be bound
 by the terms of this agreement, you may obtain a refund from the
 person or entity to whom you paid the fee as set forth in paragraph
 1.E.8.
@@ -70,7 +70,7 @@ displaying or creating derivative works based on the work as long as
 all references to Project Gutenberg are removed. Of course, we hope
 that you will support the Project Gutenberg-tm mission of promoting
 free access to electronic works by freely sharing Project Gutenberg-tm
-works in compliance with the terms of this agreement for keeping the
+works in compliance with the terms of this agreement to keep the
 Project Gutenberg-tm name associated with the work. You can easily
 comply with the terms of this agreement by keeping this work in the
 same format with its attached full Project Gutenberg-tm License when
@@ -122,7 +122,7 @@ will be linked to the Project Gutenberg-tm License for all works
 posted with the permission of the copyright holder found at the
 beginning of this work.
 
-1.E.4. Do not unlink or detach or remove the full Project Gutenberg-tm
+1.E.4. Do not unlink, detach, or remove the full Project Gutenberg-tm
 License terms from this work, or any files containing a part of this
 work or any other work associated with Project Gutenberg-tm.
 
@@ -137,7 +137,7 @@ compressed, marked up, nonproprietary or proprietary form, including
 any word processing or hypertext form. However, if you provide access
 to or distribute copies of a Project Gutenberg-tm work in a format
 other than "Plain Vanilla ASCII" or other format used in the official
-version posted on the official Project Gutenberg-tm web site
+version posted on the official Project Gutenberg-tm website
 (www.gutenberg.org), you must, at no additional cost, fee or expense
 to the user, provide a copy, a means of exporting a copy, or a means
 of obtaining a copy upon request, of the work in its original "Plain
@@ -315,7 +315,7 @@ against accepting unsolicited donations from donors in such states who
 approach us with offers to donate.
 
 International donations are gratefully accepted, but we cannot make
-any statements concerning tax treatment of donations received from
+any statements concerning the tax treatment of donations received from
 outside the United States. U.S. laws alone swamp our small staff.
 
 Please check the Project Gutenberg Web pages for current donation