diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 4c02f30b00dea1077f05dfce976f24a843c12aeb..00e24aca83f678919ed160eb4da1ded6923d51a7 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -80,7 +80,7 @@ repos:
     rev: v3.0.3
     hooks:
       - id: prettier
-        exclude: llama-index-core/llama_index/core/_static|poetry.lock|llama-index-legacy/llama_index/legacy/_static|docs/docs/api_reference
+        exclude: llama-index-core/llama_index/core/_static|poetry.lock|llama-index-legacy/llama_index/legacy/_static|docs/docs
   - repo: https://github.com/codespell-project/codespell
     rev: v2.2.6
     hooks:
diff --git a/docs/docs/getting_started/concepts.md b/docs/docs/getting_started/concepts.md
index 6f2d7d62aefd3b1d9f09c74a96563f62643093c3..84f9f4cb6bd3d76e271bb3712bd6790ddf77acaf 100644
--- a/docs/docs/getting_started/concepts.md
+++ b/docs/docs/getting_started/concepts.md
@@ -3,7 +3,7 @@
 This is a quick guide to the high-level concepts you'll encounter frequently when building LLM applications.
 
 !!! tip
-If you haven't, [install LlamaIndex](./installation.md) and complete the [starter tutorial](./starter_example.md) before you read this. It will help ground these steps in your experience.
+    If you haven't, [install LlamaIndex](./installation.md) and complete the [starter tutorial](./starter_example.md) before you read this. It will help ground these steps in your experience.
 
 ## Retrieval Augmented Generation (RAG)
 
@@ -77,5 +77,6 @@ A chat engine is an end-to-end pipeline for having a conversation with your data
 An agent is an automated decision-maker powered by an LLM that interacts with the world via a set of [tools](../module_guides/deploying/agents/tools/llamahub_tools_guide.md). Agents can take an arbitrary number of steps to complete a given task, dynamically deciding on the best course of action rather than following pre-determined steps. This gives them additional flexibility to tackle more complex tasks.
 
 !!! tip
-_ Tell me how to [customize things](./customization.md)
-_ Continue learning with our [understanding LlamaIndex](../understanding/index.md) guide \* Ready to dig deep? Check out the module guides on the left
+    * Tell me how to [customize things](./customization.md)
+    * Continue learning with our [understanding LlamaIndex](../understanding/index.md) guide
+    * Ready to dig deep? Check out the [component guides](../module_guides/index.md)
diff --git a/docs/docs/getting_started/customization.md b/docs/docs/getting_started/customization.md
index d1eb32bf84e8d355bf6974023565bb8c176e2f36..58672402edf2a60067ad81e382e144ca2a4384e8 100644
--- a/docs/docs/getting_started/customization.md
+++ b/docs/docs/getting_started/customization.md
@@ -1,7 +1,7 @@
 # Frequently Asked Questions (FAQ)
 
 !!! tip
-If you haven't already, [install LlamaIndex](installation.md) and complete the [starter tutorial](starter_example.md). If you run into terms you don't recognize, check out the [high-level concepts](concepts.md).
+    If you haven't already, [install LlamaIndex](installation.md) and complete the [starter tutorial](starter_example.md). If you run into terms you don't recognize, check out the [high-level concepts](concepts.md).
 
 In this section, we start with the code you wrote for the [starter example](starter_example.md) and show you the most common ways you might want to customize it for your use case:
 
@@ -161,4 +161,4 @@ Learn more about the [chat engine](../module_guides/deploying/chat_engines/usage
 ## Next Steps
 
 - Want a thorough walkthrough of (almost) everything you can configure? Get started with [Understanding LlamaIndex](../understanding/index.md).
-- Want more in-depth understanding of specific modules? Check out the module guides in the left nav 👈
+- Want more in-depth understanding of specific modules? Check out the [component guides](../module_guides/index.md).
diff --git a/docs/docs/getting_started/installation.md b/docs/docs/getting_started/installation.md
index 99c56dc5f819f44bbbba584403495e012651accf..2cd24a267bf3a72431c927f51d7f555fff127e6c 100644
--- a/docs/docs/getting_started/installation.md
+++ b/docs/docs/getting_started/installation.md
@@ -34,7 +34,7 @@ By default, we use the OpenAI `gpt-3.5-turbo` model for text generation and `tex
 You can obtain an API key by logging into your OpenAI account and [creating a new API key](https://platform.openai.com/account/api-keys).
 
 !!! tip
-You can also [use one of many other available LLMs](../module_guides/models/llms/usage_custom.md). You may need additional environment keys + tokens setup depending on the LLM provider.
+    You can also [use one of many other available LLMs](../module_guides/models/llms/usage_custom.md). You may need to set up additional environment keys and tokens, depending on the LLM provider.
 
 [Check out our OpenAI Starter Example](starter_example.md)
 
diff --git a/docs/docs/getting_started/starter_example.md b/docs/docs/getting_started/starter_example.md
index 43601e8529a14e2509998e75a20f0391c4a76e15..05be9f4d914e8dd038fb31eee61f1c41fd15afa8 100644
--- a/docs/docs/getting_started/starter_example.md
+++ b/docs/docs/getting_started/starter_example.md
@@ -3,11 +3,11 @@
 This is our famous "5 lines of code" starter example using OpenAI.
 
 !!! tip
-Make sure you've followed the [installation](installation.md) steps first.
+    Make sure you've followed the [installation](installation.md) steps first.
 
 !!! tip
-Want to use local models?
-If you want to do our starter tutorial using only local models, [check out this tutorial instead](starter_example_local.md).
+    Want to use local models?
+    If you want to do our starter tutorial using only local models, [check out this tutorial instead](starter_example_local.md).
 
 ## Download data
 
@@ -118,4 +118,7 @@ print(response)
 
 Now you can efficiently query to your heart's content! But this is just the beginning of what you can do with LlamaIndex.
 
-!!! tip - learn more about the [high-level concepts](./concepts.md). - tell me how to [customize things](./customization.md). - curious about a specific module? check out the guides on the left 👈
+!!! tip
+    - Learn more about the [high-level concepts](./concepts.md).
+    - Tell me how to [customize things](./customization.md).
+    - Curious about a specific module? Check out the [component guides](../module_guides/index.md).
diff --git a/docs/docs/getting_started/starter_example_local.md b/docs/docs/getting_started/starter_example_local.md
index 6b376c492729785e1ca8613c60028d3d93e29885..24a0978e58c0a17fba37356d5941c33b003a0158 100644
--- a/docs/docs/getting_started/starter_example_local.md
+++ b/docs/docs/getting_started/starter_example_local.md
@@ -1,7 +1,7 @@
 # Starter Tutorial (Local Models)
 
 !!! tip
-Make sure you've followed the [custom installation](installation.md) steps first.
+    Make sure you've followed the [custom installation](installation.md) steps first.
 
 This is our famous "5 lines of code" starter example with local LLM and embedding models. We will use `BAAI/bge-small-en-v1.5` as our embedding model and `Mistral-7B` served through `Ollama` as our LLM.
 
@@ -70,5 +70,6 @@ This creates an engine for Q&A over your index and asks a simple question. You s
 You can view logs and persist/load the index as in our [starter example](starter_example.md).
 
 !!! tip
-_ learn more about the [high-level concepts](./concepts.md).
-_ tell me how to [customize things](./customization.md). \* curious about a specific module? check out the guides on the left 👈
+    - Learn more about the [high-level concepts](./concepts.md).
+    - Tell me how to [customize things](./customization.md).
+    - Curious about a specific module? Check out the [component guides](../module_guides/index.md).
diff --git a/docs/docs/index.md b/docs/docs/index.md
index f42d45874f138e84ace137a14453cf3abb00eae3..78d6bbf3aa90fd034439aa9a9c08ebb7607447cb 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -5,7 +5,6 @@
 LlamaIndex is a data framework for [LLM](https://en.wikipedia.org/wiki/Large_language_model)-based applications which benefit from context augmentation. Such LLM systems have been termed RAG systems, standing for "Retrieval-Augmented Generation". LlamaIndex provides the essential abstractions to more easily ingest, structure, and access private or domain-specific data in order to inject these safely and reliably into LLMs for more accurate text generation. It's available in Python (these docs) and [Typescript](https://ts.llamaindex.ai/).
 
 !!! tip
-
     Updating to LlamaIndex v0.10.0? Check out the [migration guide](./getting_started/v0_10_0_migration.md).
 
 ## 🚀 Why Context Augmentation?
diff --git a/docs/docs/module_guides/deploying/chat_engines/index.md b/docs/docs/module_guides/deploying/chat_engines/index.md
index 85cc4ca705117f6e397f34407ef8b308080b51d9..90a29a791ea806e4f8165ea892ad3cab04964e80 100644
--- a/docs/docs/module_guides/deploying/chat_engines/index.md
+++ b/docs/docs/module_guides/deploying/chat_engines/index.md
@@ -10,7 +10,7 @@ Conceptually, it is a **stateful** analogy of a [Query Engine](../query_engine/i
 By keeping track of the conversation history, it can answer questions with past context in mind.
 
 !!! tip
-If you want to ask standalone question over your data (i.e. without keeping track of conversation history), use [Query Engine](../query_engine/index.md) instead.
+    If you want to ask a standalone question over your data (i.e. without keeping track of conversation history), use a [Query Engine](../query_engine/index.md) instead.
 
 ## Usage Pattern
 
diff --git a/docs/docs/module_guides/deploying/chat_engines/usage_pattern.md b/docs/docs/module_guides/deploying/chat_engines/usage_pattern.md
index 7699c337d64787bf45690fbe6f3b7366fba11c7a..654c85513f7e2ebf22f882ab7584dbad7e414e4d 100644
--- a/docs/docs/module_guides/deploying/chat_engines/usage_pattern.md
+++ b/docs/docs/module_guides/deploying/chat_engines/usage_pattern.md
@@ -9,7 +9,7 @@ chat_engine = index.as_chat_engine()
 ```
 
 !!! tip
-To learn how to build an index, see [Indexing](../../indexing/index_guide.md)
+    To learn how to build an index, see [Indexing](../../indexing/index_guide.md)
 
 Have a conversation with your data:
 
diff --git a/docs/docs/module_guides/deploying/query_engine/index.md b/docs/docs/module_guides/deploying/query_engine/index.md
index 4ce3693abbb114c8a26eea80aa92580a95658f18..baad5c9fc1e138f10e9032dd52d6701b6afafc23 100644
--- a/docs/docs/module_guides/deploying/query_engine/index.md
+++ b/docs/docs/module_guides/deploying/query_engine/index.md
@@ -9,7 +9,7 @@ It is most often (but not always) built on one or many [indexes](../../indexing/
 You can compose multiple query engines to achieve more advanced capability.
 
 !!! tip
-If you want to have a conversation with your data (multiple back-and-forth instead of a single question & answer), take a look at [Chat Engine](../chat_engines/index.md)
+    If you want to have a conversation with your data (multiple back-and-forth exchanges instead of a single question & answer), take a look at [Chat Engine](../chat_engines/index.md)
 
 ## Usage Pattern
 
diff --git a/docs/docs/module_guides/deploying/query_engine/usage_pattern.md b/docs/docs/module_guides/deploying/query_engine/usage_pattern.md
index c4567aa2916d3908475e97028312ad7ccfa9b96f..26d2a4ea4bb24eebf07cad04779b578d822643cc 100644
--- a/docs/docs/module_guides/deploying/query_engine/usage_pattern.md
+++ b/docs/docs/module_guides/deploying/query_engine/usage_pattern.md
@@ -9,7 +9,7 @@ query_engine = index.as_query_engine()
 ```
 
 !!! tip
-To learn how to build an index, see [Indexing](../../indexing/index.md)
+    To learn how to build an index, see [Indexing](../../indexing/index.md)
 
 Ask a question over your data
 
diff --git a/docs/docs/module_guides/indexing/vector_store_index.md b/docs/docs/module_guides/indexing/vector_store_index.md
index 1d7e6a9459f0e5bab55461a12426855e624080f9..f0d2076e49dc2ca51f0382a9ad9bbc12d1b196cc 100644
--- a/docs/docs/module_guides/indexing/vector_store_index.md
+++ b/docs/docs/module_guides/indexing/vector_store_index.md
@@ -21,7 +21,14 @@ index = VectorStoreIndex.from_documents(documents)
 ```
 
 !!! tip
-If you are using `from_documents` on the command line, it can be convenient to pass `show_progress=True` to display a progress bar during index construction.
+    If you are using `from_documents` on the command line, it can be convenient to pass `show_progress=True` to display a progress bar during index construction.
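+
+    For example, continuing the snippet above (a minimal sketch):
+
+    ```python
+    # display a progress bar during index construction
+    index = VectorStoreIndex.from_documents(documents, show_progress=True)
+    ```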
 
 When you use `from_documents`, your Documents are split into chunks and parsed into [`Node` objects](../loading/documents_and_nodes/index.md), lightweight abstractions over text strings that keep track of metadata and relationships.
 
@@ -30,7 +30,14 @@ For more on how to load documents, see [Understanding Loading](../loading/index.
 By default, VectorStoreIndex stores everything in memory. See [Using Vector Stores](#using-vector-stores) below for more on how to use persistent vector stores.
 
 !!! tip
-By default, the `VectorStoreIndex` will generate and insert vectors in batches of 2048 nodes. If you are memory constrained (or have a surplus of memory), you can modify this by passing `insert_batch_size=2048` with your desired batch size.
+    By default, the `VectorStoreIndex` will generate and insert vectors in batches of 2048 nodes. If you are memory constrained (or have a surplus of memory), you can modify this by passing `insert_batch_size` with your desired batch size.
 
     This is especially helpful when you are inserting into a remotely hosted vector database.
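+
+    For example, continuing the snippet above (a minimal sketch; `512` is just an illustrative value):
+
+    ```python
+    # smaller batches lower peak memory use; the default is 2048
+    index = VectorStoreIndex.from_documents(documents, insert_batch_size=512)
+    ```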
 
@@ -59,7 +59,7 @@ nodes = pipeline.run(documents=[Document.example()])
 ```
 
 !!! tip
-You can learn more about [how to use the ingestion pipeline](../loading/ingestion_pipeline/index.md).
+    You can learn more about [how to use the ingestion pipeline](../loading/ingestion_pipeline/index.md).
 
 ### Creating and managing nodes directly
 
diff --git a/docs/docs/module_guides/loading/connector/index.md b/docs/docs/module_guides/loading/connector/index.md
index 0c9aa02575f25c4706c67d114cd51ecc33665b38..db1628c3f26ab1e24771a7bb9a9fded18ba0db60 100644
--- a/docs/docs/module_guides/loading/connector/index.md
+++ b/docs/docs/module_guides/loading/connector/index.md
@@ -5,7 +5,16 @@
 A data connector (aka `Reader`) ingests data from different data sources and data formats into a simple `Document` representation (text and simple metadata).
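+
+For example, a minimal sketch using the built-in `SimpleDirectoryReader` (the `./data` path is a placeholder):
+
+```python
+from llama_index.core import SimpleDirectoryReader
+
+# each file in the folder is loaded into one or more Document objects
+documents = SimpleDirectoryReader("./data").load_data()
+```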
 
 !!! tip
-Once you've ingested your data, you can build an [Index](../../indexing/index.md) on top, ask questions using a [Query Engine](../../deploying/query_engine/index.md), and have a conversation using a [Chat Engine](../../deploying/chat_engines/index.md).
+    Once you've ingested your data, you can build an [Index](../../indexing/index.md) on top, ask questions using a [Query Engine](../../deploying/query_engine/index.md), and have a conversation using a [Chat Engine](../../deploying/chat_engines/index.md).
 
 ## LlamaHub
 
diff --git a/docs/docs/module_guides/querying/node_postprocessors/index.md b/docs/docs/module_guides/querying/node_postprocessors/index.md
index e8c777434e635dfbc66232ec0312ba8b5851cd00..8efc9f31dfd3d4c29bc0c4327e2fa142466f535e 100644
--- a/docs/docs/module_guides/querying/node_postprocessors/index.md
+++ b/docs/docs/module_guides/querying/node_postprocessors/index.md
@@ -9,7 +9,7 @@ In LlamaIndex, node postprocessors are most commonly applied within a query engi
 LlamaIndex offers several node postprocessors for immediate use, while also providing a simple API for adding your own custom postprocessors.
 
 !!! tip
-Confused about where node postprocessor fits in the pipeline? Read about [high-level concepts](../../../getting_started/concepts.md)
+    Confused about where node postprocessors fit in the pipeline? Read about [high-level concepts](../../../getting_started/concepts.md)
 
 ## Usage Pattern
 
diff --git a/docs/docs/module_guides/querying/node_postprocessors/node_postprocessors.md b/docs/docs/module_guides/querying/node_postprocessors/node_postprocessors.md
index b1c8b5c927edde1e9b3f1194afa660932d868687..42918eb6e978a1c0bf4bf0726c56d7c8cba3e49c 100644
--- a/docs/docs/module_guides/querying/node_postprocessors/node_postprocessors.md
+++ b/docs/docs/module_guides/querying/node_postprocessors/node_postprocessors.md
@@ -76,8 +76,6 @@ postprocessor.postprocess_nodes(nodes)
 
 A full notebook guide can be found [here](../../../examples/node_postprocessor/OptimizerDemo.ipynb)
 
-(cohere_rerank)=
-
 ## CohereRerank
 
 Uses the "Cohere ReRank" functionality to re-order nodes, and returns the top N nodes.
@@ -308,7 +306,7 @@ postprocessor = RankLLMRerank(top_n=5, model="zephyr")
 postprocessor.postprocess_nodes(nodes)
 ```
 
-Full notebook guide is available [Van Gogh Wiki](/examples/node_postprocessor/rankLLM.ipynb).
+A full [notebook example](../../../examples/node_postprocessor/rankLLM.ipynb) is available.
 
 ## All Notebooks
 
diff --git a/docs/docs/module_guides/querying/response_synthesizers/index.md b/docs/docs/module_guides/querying/response_synthesizers/index.md
index f4c607d84000ccd36922a31c294b41230ae23ece..928c5aff95abaf7fc0312fd66e1c8d211db28a23 100644
--- a/docs/docs/module_guides/querying/response_synthesizers/index.md
+++ b/docs/docs/module_guides/querying/response_synthesizers/index.md
@@ -9,7 +9,7 @@ The method for doing this can take many forms, from as simple as iterating over
 When used in a query engine, the response synthesizer is used after nodes are retrieved from a retriever, and after any node postprocessors are run.
 
 !!! tip
-Confused about where response synthesizer fits in the pipeline? Read the [high-level concepts](../../../getting_started/concepts.md)
+    Confused about where the response synthesizer fits in the pipeline? Read the [high-level concepts](../../../getting_started/concepts.md)
 
 ## Usage Pattern
 
@@ -64,7 +64,7 @@ response = query_engine.query("query_text")
 ```
 
 !!! tip
-To learn how to build an index, see [Indexing](../../indexing/index.md)
+    To learn how to build an index, see [Indexing](../../indexing/index.md)
 
 ## Configuring the Response Mode
 
diff --git a/docs/docs/module_guides/querying/retriever/index.md b/docs/docs/module_guides/querying/retriever/index.md
index 21fb766e3b300975d8179a3649f79ed76ef4c9e1..6f4002b19f77e12dc75b5fcb4338ee6624d5f0e0 100644
--- a/docs/docs/module_guides/querying/retriever/index.md
+++ b/docs/docs/module_guides/querying/retriever/index.md
@@ -8,7 +8,7 @@ It can be built on top of [indexes](../../indexing/index.md), but can also be de
 It is used as a key building block in [query engines](../../deploying/query_engine/index.md) (and [Chat Engines](../../deploying/chat_engines/index.md)) for retrieving relevant context.
 
 !!! tip
-Confused about where retriever fits in the pipeline? Read about [high-level concepts](../../../getting_started/concepts.md)
+    Confused about where retrievers fit in the pipeline? Read about [high-level concepts](../../../getting_started/concepts.md)
 
 ## Usage Pattern
 
diff --git a/docs/docs/module_guides/supporting_modules/settings.md b/docs/docs/module_guides/supporting_modules/settings.md
index 3f19603a255c1ff0b71df5d3bf333d8c6a7c670f..adf7991e63076ffa3f8f15d8e84d2998636e3328 100644
--- a/docs/docs/module_guides/supporting_modules/settings.md
+++ b/docs/docs/module_guides/supporting_modules/settings.md
@@ -110,7 +110,12 @@ Settings.num_output = 256
 ```
 
 !!! tip
-Learn how to configure specific modules: - [LLM](../models/llms/usage_custom.md) - [Embedding Model](../models/embeddings.md) - [Node Parser/Text Splitters](../loading/node_parsers/index.md) - [Callbacks](../observability/callbacks/index.md)
+    Learn how to configure specific modules:
+
+    - [LLM](../models/llms/usage_custom.md)
+    - [Embedding Model](../models/embeddings.md)
+    - [Node Parser/Text Splitters](../loading/node_parsers/index.md)
+    - [Callbacks](../observability/callbacks/index.md)
 
 ## Setting local configurations
 
diff --git a/docs/docs/understanding/index.md b/docs/docs/understanding/index.md
index e67749e901d01f73990db2aba626dea135abb2f3..4e5d4ee12f1c9569e7f567b6466e7f6517016343 100644
--- a/docs/docs/understanding/index.md
+++ b/docs/docs/understanding/index.md
@@ -5,7 +5,7 @@ Welcome to the beginning of Understanding LlamaIndex. This is a series of short,
 ## Key steps in building an LLM application
 
 !!! tip
-If you've already read our [high-level concepts](../getting_started/concepts.md) page you'll recognize several of these steps.
+    If you've already read our [high-level concepts](../getting_started/concepts.md) page, you'll recognize several of these steps.
 
 There are a series of key steps involved in building any LLM-powered application, whether it's answering questions about your data, creating a chatbot, or building an autonomous agent. Throughout our documentation, you'll notice sections are arranged roughly in the order you'll perform these steps while building your app. You'll learn about:
 
diff --git a/docs/docs/understanding/indexing/indexing.md b/docs/docs/understanding/indexing/indexing.md
index 51f0f4a04dd4bfc6d27fde119a31f5dd134b57b2..2c4a022fa95ff87a103eb867d929db544435c103 100644
--- a/docs/docs/understanding/indexing/indexing.md
+++ b/docs/docs/understanding/indexing/indexing.md
@@ -45,7 +45,7 @@ index = VectorStoreIndex.from_documents(documents)
 ```
 
 !!! tip
-`from_documents` also takes an optional argument `show_progress`. Set it to `True` to display a progress bar during index construction.
+    `from_documents` also takes an optional argument `show_progress`. Set it to `True` to display a progress bar during index construction.
 
 You can also choose to build an index over a list of Node objects directly:
 
diff --git a/docs/docs/understanding/querying/querying.md b/docs/docs/understanding/querying/querying.md
index 4cf6e6752e565e3c958da35d34fc29d7fea47aa1..5fb0bbdb8aa45dde964666f84cfa8fea01418b6c 100644
--- a/docs/docs/understanding/querying/querying.md
+++ b/docs/docs/understanding/querying/querying.md
@@ -27,7 +27,7 @@ However, there is more to querying than initially meets the eye. Querying consis
 - **Response synthesis** is when your query, your most-relevant data and your prompt are combined and sent to your LLM to return a response.
 
 !!! tip
-You can find out about [how to attach metadata to documents](../../module_guides/loading/documents_and_nodes/usage_documents.md) and [nodes](../../module_guides/loading/documents_and_nodes/usage_nodes.md).
+    You can find out about [how to attach metadata to documents](../../module_guides/loading/documents_and_nodes/usage_documents.md) and [nodes](../../module_guides/loading/documents_and_nodes/usage_nodes.md).
 
 ## Customizing the stages of querying
 
diff --git a/docs/docs/understanding/storing/storing.md b/docs/docs/understanding/storing/storing.md
index 069a2afcd577459a7c34f718a312e0d3e6ee0330..6c969568f721b247d9defebb6ae03a5c5d8ff92a 100644
--- a/docs/docs/understanding/storing/storing.md
+++ b/docs/docs/understanding/storing/storing.md
@@ -29,9 +29,14 @@
 ```
 
 !!! tip
-Important: if you had initialized your index with a custom
-`transformations`, `embed_model`, etc., you will need to pass in the same
-options during `load_index_from_storage`, or have it set as the [global settings](../../module_guides/supporting_modules/settings.md).
+    Important: if you initialized your index with custom `transformations`, `embed_model`, etc., you will need to pass in the same options during `load_index_from_storage`, or have them set as the [global settings](../../module_guides/supporting_modules/settings.md).
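+
+    For example, continuing the snippet above (a minimal sketch, assuming the index was built with a custom `embed_model`):
+
+    ```python
+    # pass the same options used at build time so the index is reconstructed consistently
+    index = load_index_from_storage(storage_context, embed_model=embed_model)
+    ```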
 
 ## Using Vector Stores
 
@@ -114,7 +112,7 @@ print(response)
 ```
 
 !!! tip
-We have a [more thorough example of using Chroma](../../examples/vector_stores/ChromaIndexDemo.ipynb) if you want to go deeper on this store.
+    We have a [more thorough example of using Chroma](../../examples/vector_stores/ChromaIndexDemo.ipynb) if you want to go deeper on this store.
 
 ### You're ready to query!
 
diff --git a/docs/docs/understanding/using_llms/using_llms.md b/docs/docs/understanding/using_llms/using_llms.md
index dfdf3a16859e20a1d92c9e68397e353a2fd6496e..545ccc7c128f9940891975aef3141f8a61bbe1d6 100644
--- a/docs/docs/understanding/using_llms/using_llms.md
+++ b/docs/docs/understanding/using_llms/using_llms.md
@@ -1,7 +1,7 @@
 # Using LLMs
 
 !!! tip
-For a list of our supported LLMs and a comparison of their functionality, check out our [LLM module guide](../../module_guides/models/llms.md).
+    For a list of our supported LLMs and a comparison of their functionality, check out our [LLM module guide](../../module_guides/models/llms.md).
 
 One of the first steps when building an LLM-based application is deciding which LLM to use; you can also use more than one if you wish.
 
@@ -39,14 +39,24 @@ index = VectorStoreIndex.from_documents(
 In this case, you've instantiated OpenAI and customized it to use the `gpt-4` model instead of the default `gpt-3.5-turbo`, and also modified the `temperature`. The `VectorStoreIndex` will now use gpt-4 to answer questions when querying.
 
 !!! tip
-The `Settings` is a bundle of configuration data that you pass into different parts of LlamaIndex. You can [learn more about Settings](../../module_guides/supporting_modules/settings.md) and how to customize it.
+    `Settings` is a bundle of configuration data that you pass into different parts of LlamaIndex. You can [learn more about Settings](../../module_guides/supporting_modules/settings.md) and how to customize it.
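+
+    For example, a minimal sketch of setting the global default LLM (assuming the v0.10+ package layout):
+
+    ```python
+    from llama_index.core import Settings
+    from llama_index.llms.openai import OpenAI
+
+    # every component that needs an LLM now defaults to gpt-4
+    Settings.llm = OpenAI(model="gpt-4", temperature=0.1)
+    ```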
 
 ## Available LLMs
 
 We support integrations with OpenAI, Hugging Face, PaLM, and more. Check out our [module guide to LLMs](../../module_guides/models/llms.md) for a full list, including how to run a local model.
 
 !!! tip
-A general note on privacy and LLMs can be found on the [privacy page](./privacy.md).
+    A general note on privacy and LLMs can be found on the [privacy page](./privacy.md).
 
 ### Using a local LLM