diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2dc4e36babb60ba6e02b81289482175f6a0a782a..6093029b4362264c027433700137ce0205b75aed 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -34,7 +34,7 @@
 - Gemini Model Checks (#9563)
 - Update OpenAI fine-tuning with latest changes (#9564)
 - fix/Reintroduce `WHERE` filter to the Sparse Query for PgVectorStore (#9529)
-- Update Ollama API for ollama v0.1.16 (#9558)
+- Update Ollama API to ollama v0.1.16 (#9558)
 - ollama: strip invalid `formatted` option (#9555)
 - add a device in optimum push #9541 (#9554)
 - Title vs content difference for Gemini Embedding (#9547)
@@ -206,7 +206,7 @@
 ### Bug Fixes / Nits
 
 - Fixed bug in formatting chat prompt templates when estimating chunk sizes (#9025)
-- Sandboxed Pandas execution, remidiate CVE-2023-39662 (#8890)
+- Sandboxed Pandas execution, remediate CVE-2023-39662 (#8890)
 - Restored `mypy` for Python 3.8 (#9031)
 - Loosened `dataclasses-json` version range,
   and removes unnecessary `jinja2` extra from `pandas` (#9042)
diff --git a/docs/module_guides/models/embeddings.md b/docs/module_guides/models/embeddings.md
index d1ef3fb9382370bf9b90854c7c1479a39a85d926..4706e077d8af01b41bd56c100e27039f379ca5c4 100644
--- a/docs/module_guides/models/embeddings.md
+++ b/docs/module_guides/models/embeddings.md
@@ -36,7 +36,7 @@ You can find more usage details and available customization options below.
 
 ## Getting Started
 
-The most common usage for an embedding model will be setting it in the service context object, and then using it to construct an index and query. The input documents will be broken into nodes, and the emedding model will generate an embedding for each node.
+The most common usage for an embedding model will be setting it in the service context object, and then using it to construct an index and query. The input documents will be broken into nodes, and the embedding model will generate an embedding for each node.
 
 By default, LlamaIndex will use `text-embedding-ada-002`, which is what the example below manually sets up for you.
 
@@ -47,7 +47,7 @@ from llama_index.embeddings import OpenAIEmbedding
 embed_model = OpenAIEmbedding()
 service_context = ServiceContext.from_defaults(embed_model=embed_model)
 
-# optionally set a global service context to avoid passing it into other objects every time
+# Optionally set a global service context to avoid passing it into other objects every time
 from llama_index import set_global_service_context
 
 set_global_service_context(service_context)
diff --git a/docs/understanding/understanding.md b/docs/understanding/understanding.md
index 2fd1afcb16ea2911607d30eb278c46175ec64fec..0f7126741fcf1e302097ef1caed3c19a5c539f0d 100644
--- a/docs/understanding/understanding.md
+++ b/docs/understanding/understanding.md
@@ -1,6 +1,6 @@
 # Building an LLM application
 
-Welcome to the start of Understanding LlamaIndex. This is a series of short, bite-sized tutorials on every stage of building an LLM application to get you acquainted with how to use LlamaIndex before diving into more advanced and subtle strategies. If you're an experienced programmer new to LlamaIndex, this is the place to start.
+Welcome to the beginning of Understanding LlamaIndex. This is a series of short, bite-sized tutorials on every stage of building an LLM application to get you acquainted with how to use LlamaIndex before diving into more advanced and subtle strategies. If you're an experienced programmer new to LlamaIndex, this is the place to start.
 
 ## Key steps in building an LLM application
 
@@ -24,7 +24,7 @@ There are a series of key steps involved in building any LLM-powered application
 
 - **[Tracing and debugging](/understanding/tracing_and_debugging/tracing_and_debugging.md)**: also called **observability**, it's especially important with LLM applications to be able to look into the inner workings of what's going on to help you debug problems and spot places to improve.
 
-- **[Evaluating](/understanding/evaluating/evaluating.md)**: every strategy has pros and cons and a key part of building, shipping and evolving your application is evaluating whether your change has improved your application in terms of accuracy, performance, clarity, cost and more. Reliably evaluating your changes is a big part of LLM application development.
+- **[Evaluating](/understanding/evaluating/evaluating.md)**: every strategy has pros and cons and a key part of building, shipping and evolving your application is evaluating whether your change has improved your application in terms of accuracy, performance, clarity, cost and more. Reliably evaluating your changes is a crucial part of LLM application development.
 
 ## Let's get started!
 
diff --git a/docs/use_cases/agents.md b/docs/use_cases/agents.md
index 5b7664afe45d1808798539d17996dc91b72d57e7..7ecb67838c79b89c2f71171a9654335b3d4cfd20 100644
--- a/docs/use_cases/agents.md
+++ b/docs/use_cases/agents.md
@@ -25,4 +25,4 @@ more information + a detailed analysis.
 
 ## Learn more
 
-Our Putting It All Together section has [more on agents](/understanding/putting_it_all_together/agents.md)
+Our Putting It All Together section has [more on agents](/docs/understanding/putting_it_all_together/agents.md)
diff --git a/docs/use_cases/chatbots.md b/docs/use_cases/chatbots.md
index b75d3389a2e32b719c3137e01347253599c09032..a801dec80565162eed8ba68c3b4ff589bf785944 100644
--- a/docs/use_cases/chatbots.md
+++ b/docs/use_cases/chatbots.md
@@ -1,16 +1,16 @@
 # Chatbots
 
-Chatbots are another extremely popular use case for LLM's. Instead of a single question and answer, a chatbot can handle multiple back-and-forth queries and answers, getting clarification or answering follow-up questions.
+Chatbots are another extremely popular use case for LLMs. Instead of a single question and answer, a chatbot can handle multiple back-and-forth queries and answers, getting clarification or answering follow-up questions.
 
 LlamaIndex gives you the tools to build knowledge-augmented chatbots and agents.
 
-Here's some relevant resources:
+Here are some relevant resources:
 
-- [Building a chatbot](/understanding/putting_it_all_together/chatbots/building_a_chatbot.md) tutorial
+- [Building a chatbot](/docs/understanding/putting_it_all_together/chatbots/building_a_chatbot.md) tutorial
 - [create-llama](https://blog.llamaindex.ai/create-llama-a-command-line-tool-to-generate-llamaindex-apps-8f7683021191), a command line tool that generates a full-stack chatbot application for you
 - [SECinsights.ai](https://www.secinsights.ai/), an open-source application that uses LlamaIndex to build a chatbot that answers questions about SEC filings
 - [RAGs](https://blog.llamaindex.ai/introducing-rags-your-personalized-chatgpt-experience-over-your-data-2b9d140769b1), a project inspired by OpenAI's GPTs that lets you build a low-code chatbot over your data using Streamlit
-- Our [OpenAI agents](/module_guides/deploying/agents/modules.md) are all chat bots in nature
+- Our [OpenAI agents](/docs/module_guides/deploying/agents/modules.md) are all chatbots by nature
 
 ## External sources
 
diff --git a/docs/use_cases/extraction.md b/docs/use_cases/extraction.md
index 0554c3a0d1a0e4094b601a5bbadbd4cc31bb173f..e76b1ca0356406200e07015353149c8921156597 100644
--- a/docs/use_cases/extraction.md
+++ b/docs/use_cases/extraction.md
@@ -10,5 +10,5 @@ Once you have structured data you can send them to a database, or you can parse
 
 Examples:
 
-- [Extracting names and locations from descriptions of people](/examples/output_parsing/df_program.ipynb)
-- [Extracting album data from music reviews](/examples/llm/llama_api.ipynb)
+- [Extracting names and locations from descriptions of people](/docs/examples/output_parsing/df_program.ipynb)
+- [Extracting album data from music reviews](/docs/examples/llm/llama_api.ipynb)
diff --git a/docs/use_cases/multimodal.md b/docs/use_cases/multimodal.md
index f9e9aea98417598873e725421bd1efdeaa3a8f5c..e180f87c3fc17a0ed86a76b277bdc7392daa6f48 100644
--- a/docs/use_cases/multimodal.md
+++ b/docs/use_cases/multimodal.md
@@ -98,7 +98,7 @@ maxdepth: 1
 ### Using Chroma for Multi-Modal retrieval with single vector store
 
 Chroma vector DB supports single vector store for indexing both images and texts.
-Check out out Chroma + LlamaIndex integration with single Multi-Modal Vector Store for both images/texts index and retrieval.
+Check out the Chroma + LlamaIndex integration with a single Multi-Modal Vector Store for indexing and retrieving both images and text.
 
 ```{toctree}
 ---