diff --git a/docs/getting_started/installation.md b/docs/getting_started/installation.md
index ff817d02da299150a96f0bfa9cf15ffc85c24d98..278bc875035f2b616ac451b70349dcbe1d58c3bd 100644
--- a/docs/getting_started/installation.md
+++ b/docs/getting_started/installation.md
@@ -22,10 +22,14 @@ You can also [use one of many other available LLMs](/module_guides/models/llms/u
 need additional environment keys + tokens setup depending on the LLM provider.
 ```
 
+[Check out our OpenAI Starter Example](starter_example.md)
+
 ## Local Model Setup
 
 If you don't wish to use OpenAI, consider setting up a local LLM and embedding model in the service context.
 
+[Check out our Starter Example with Local Models](starter_example_local.md)
+
 A full guide to using and configuring LLMs available [here](/module_guides/models/llms.md).
 
 A full guide to using and configuring embedding models is available [here](/module_guides/models/embeddings.md).
diff --git a/docs/getting_started/starter_example.md b/docs/getting_started/starter_example.md
index 2738a6e437278aff66a0fb99b9cb030f7a37fba1..78bee9ea90e262f519a77ba22ff4d564e070bf2a 100644
--- a/docs/getting_started/starter_example.md
+++ b/docs/getting_started/starter_example.md
@@ -4,7 +4,11 @@
 Make sure you've followed the [installation](installation.md) steps first.
 ```
 
-This is our famous "5 lines of code" starter example.
+This is our famous "5 lines of code" starter example using OpenAI.
+
+```{admonition} Want to use local models?
+If you want to do our starter tutorial using only local models, [check out this tutorial instead](starter_example_local.md).
+```
 
 ## Download data
 
diff --git a/docs/getting_started/starter_example_local.md b/docs/getting_started/starter_example_local.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e2ae41ec74c75ddeeedde1538b205cc1bf47a56
--- /dev/null
+++ b/docs/getting_started/starter_example_local.md
@@ -0,0 +1,81 @@
+# Starter Tutorial (Local Models)
+
+```{tip}
+Make sure you've followed the [installation](installation.md) steps first.
+```
+
+This is our famous "5 lines of code" starter example with local LLM and embedding models. We will use `BAAI/bge-small-en-v1.5` as our embedding model and `Mistral-7B` served through `Ollama` as our LLM.
+
+## Download data
+
+This example uses the text of Paul Graham's essay, ["What I Worked On"](http://paulgraham.com/worked.html). This and many other examples can be found in the `examples` folder of our repo.
+
+The easiest way to get it is to [download it via this link](https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt) and save it in a folder called `data`.
+
+## Setup
+
+Ollama is a tool that helps you get set up with LLMs locally. It currently supports OSX and Linux; on Windows, you can install Ollama through WSL 2.
+
+Follow the [README](https://github.com/jmorganca/ollama) to learn how to install it.
+
+To download the Mistral-7B model, run `ollama pull mistral`.
+
+**NOTE**: You will need a machine with at least 32GB of RAM.
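+
+Once Ollama is running, you can optionally verify the connection from Python before building the index. This is a minimal sanity-check sketch, assuming a local Ollama server on its default port with the `mistral` model already pulled:
+
+```python
+from llama_index.llms import Ollama
+
+# assumes a local Ollama server with the "mistral" model already pulled
+llm = Ollama(model="mistral", request_timeout=30.0)
+print(llm.complete("Say hello in one short sentence."))
+```
+
+If this prints a response, the local LLM is ready to use in the steps below.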
+
+## Load data and build an index
+
+In the same folder where you created the `data` folder, create a file called `starter.py` with the following:
+
+```python
+from llama_index import VectorStoreIndex, SimpleDirectoryReader, ServiceContext
+from llama_index.embeddings import resolve_embed_model
+from llama_index.llms import Ollama
+
+documents = SimpleDirectoryReader("data").load_data()
+
+# bge-small-en-v1.5 embedding model
+embed_model = resolve_embed_model("local:BAAI/bge-small-en-v1.5")
+
+# ollama
+llm = Ollama(model="mistral", request_timeout=30.0)
+
+service_context = ServiceContext.from_defaults(
+    embed_model=embed_model, llm=llm
+)
+
+index = VectorStoreIndex.from_documents(
+    documents, service_context=service_context
+)
+```
+
+This builds an index over the documents in the `data` folder (which in this case just consists of the essay text, but could contain many documents).
+
+Your directory structure should look like this:
+
+<pre>
+├── starter.py
+└── data
+    └── paul_graham_essay.txt
+</pre>
+
+We use the `BAAI/bge-small-en-v1.5` model through `resolve_embed_model`, which resolves to our HuggingFaceEmbedding class. We also use our `Ollama` LLM wrapper to load in the mistral model.
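+
+For illustration, the `resolve_embed_model` call above is roughly equivalent to constructing the embedding model directly. This is a sketch, assuming `HuggingFaceEmbedding` is importable from `llama_index.embeddings` in your installed version:
+
+```python
+from llama_index.embeddings import HuggingFaceEmbedding
+
+# roughly what "local:BAAI/bge-small-en-v1.5" resolves to (an assumption, not the exact internals)
+embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
+```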
+
+## Query your data
+
+Add the following lines to `starter.py`
+
+```python
+query_engine = index.as_query_engine()
+response = query_engine.query("What did the author do growing up?")
+print(response)
+```
+
+This creates an engine for Q&A over your index and asks a simple question. You should get back a response similar to the following: `The author wrote short stories and tried to program on an IBM 1401.`
+
+You can view logs and persist/load the index just as in our [starter example](/getting_started/starter_example.md).
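+
+As a quick sketch (assuming the same persist/load workflow as the OpenAI starter example), persisting and reloading the index looks like this; note that the `service_context` must be passed again on load so the local models are used:
+
+```python
+from llama_index import StorageContext, load_index_from_storage
+
+# save the index to disk (the "./storage" directory is just an example path)
+index.storage_context.persist(persist_dir="./storage")
+
+# later, reload it without re-parsing the documents
+storage_context = StorageContext.from_defaults(persist_dir="./storage")
+index = load_index_from_storage(storage_context, service_context=service_context)
+```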
+
+```{admonition} Next Steps
+* learn more about the [high-level concepts](/getting_started/concepts.md).
+* tell me how to [customize things](/getting_started/customization.rst).
+* curious about a specific module? check out the guides on the left 👈
+```