diff --git a/docs/getting_started/v0_10_0_migration.md b/docs/getting_started/v0_10_0_migration.md
index b25cfb29022e2efcb2d5fc531006a850298fdaa8..27d5bc639013443e7f1c7984ea43cdc67d372d03 100644
--- a/docs/getting_started/v0_10_0_migration.md
+++ b/docs/getting_started/v0_10_0_migration.md
@@ -2,9 +2,22 @@
 
-With the introduction of LlamaIndex v0.10.0, there were several changes
+With the introduction of LlamaIndex v0.10.0, there were several changes:
 
-- integrations have separate `pip installs (See the [full registry](https://pretty-sodium-5e0.notion.site/ce81b247649a44e4b6b35dfb24af28a6?v=53b3c2ced7bb4c9996b81b83c9f01139))
+- integrations have separate `pip install`s (see the [full registry](https://pretty-sodium-5e0.notion.site/ce81b247649a44e4b6b35dfb24af28a6?v=53b3c2ced7bb4c9996b81b83c9f01139))
 - many imports changed
-- the service context was deprecated
+- the `ServiceContext` was deprecated
 
 Thankfully, we've tried to make these changes as easy as possible!
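+
+For example, modules that used to be imported from the top-level `llama_index` package now live under `llama_index.core`, and each integration ships as its own package. Here is a minimal sketch of the before/after (the OpenAI LLM integration is used purely as an illustration):
+
+```python
+# Old (v0.9) pattern:
+#   from llama_index import VectorStoreIndex
+#   from llama_index.llms import OpenAI
+
+# New (v0.10.0) pattern: core and integrations are separate packages,
+# e.g. pip install llama-index-core llama-index-llms-openai
+from llama_index.core import VectorStoreIndex
+from llama_index.llms.openai import OpenAI
+```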
 
@@ -72,7 +72,21 @@ from llama_index.core import Settings
 
 Settings.llm = llm
 Settings.embed_model = embed_model
-Setting.chunk_size = 512
+Settings.chunk_size = 512
 ```
 
 You can see the `ServiceContext` -> `Settings` migration guide for [more details](/module_guides/supporting_modules/service_context_migration.md).
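+
+For example, once the globals above are set, downstream components pick them up automatically. A minimal sketch (assumes your documents live in a local `./data` directory):
+
+```python
+from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
+
+Settings.chunk_size = 512  # global default picked up by all components
+
+# assumes documents in a local ./data directory (illustrative)
+documents = SimpleDirectoryReader("./data").load_data()
+index = VectorStoreIndex.from_documents(documents)  # uses Settings.embed_model
+query_engine = index.as_query_engine()  # uses Settings.llm
+print(query_engine.query("What changed in v0.10.0?"))
+```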
diff --git a/docs/index.rst b/docs/index.rst
index 7799e24493afb485542e895cb47552215669da1f..ea8fe43e671c57ca4391fea27b9428c38e43ffff 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -19,7 +19,7 @@ You may choose to **fine-tune** a LLM with your data, but:
-- Due to the cost to train, it's **hard to update** a LLM with latest information.
-- **Observability** is lacking. When you ask a LLM a question, it's not obvious how the LLM arrived at its answer.
+- Due to the cost of training, it's **hard to update** an LLM with the latest information.
+- **Observability** is lacking. When you ask an LLM a question, it's not obvious how the LLM arrived at its answer.
 
-Instead of fine-tuning, one can a context augmentation pattern called `Retrieval-Augmented Generation (RAG) <./getting_started/concepts.html>`_ to obtain more accurate text generation relevant to your specific data. RAG involves the following high level steps:
+Instead of fine-tuning, one can use a context augmentation pattern called `Retrieval-Augmented Generation (RAG) <./getting_started/concepts.html>`_ to obtain more accurate text generation relevant to your specific data. RAG involves the following high-level steps:
 
 1. Retrieve information from your data sources first,
 2. Add it to your question as context, and
@@ -36,7 +36,7 @@ In doing so, RAG overcomes all three weaknesses of the fine-tuning approach:
 
 Firstly, LlamaIndex imposes no restriction on how you use LLMs. You can still use LLMs as auto-complete, chatbots, semi-autonomous agents, and more (see Use Cases on the left). It only makes LLMs more relevant to you.
 
-LlamaIndex provides the following tools to help you quickly standup production-ready RAG systems:
+LlamaIndex provides the following tools to help you quickly stand up production-ready RAG systems:
 
 - **Data connectors** ingest your existing data from their native source and format. These could be APIs, PDFs, SQL, and (much) more.
 - **Data indexes** structure your data in intermediate representations that are easy and performant for LLMs to consume.
@@ -70,7 +70,7 @@ We recommend starting at `how to read these docs <./getting_started/reading.html
 
 To download or contribute, find LlamaIndex on:
 
-- Github: https://github.com/jerryjliu/llama_index
+- GitHub: https://github.com/run-llama/llama_index
-- PyPi:
+- PyPI:
 
   - LlamaIndex: https://pypi.org/project/llama-index/.