From bb2272b252c328c99f0a7d3209a153919a2990d5 Mon Sep 17 00:00:00 2001
From: James Briggs <james.briggs@hotmail.com>
Date: Thu, 1 Aug 2024 12:31:26 +0400
Subject: [PATCH] feat: add index page content

---
 docs/source/index.rst | 110 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 107 insertions(+), 3 deletions(-)

diff --git a/docs/source/index.rst b/docs/source/index.rst
index 1fb0f6bc..5314eaee 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -6,10 +6,114 @@
 Semantic Router documentation
 =============================
 
-Add your content using ``reStructuredText`` syntax. See the
-`reStructuredText <https://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html>`_
-documentation for details.
+Semantic Router is a superfast decision-making layer for your LLMs and agents. Rather than waiting for slow LLM generations to make tool-use decisions, we use the magic of semantic vector space to make those decisions — _routing_ our requests using _semantic_ meaning.
 
+---
+
+## Quickstart
+
+To get started with _semantic-router_, we install it like so:
+
+```
+pip install -qU semantic-router
+```
+
+❗️ _If you want to use a fully local version of semantic router, you can use `HuggingFaceEncoder` and `LlamaCppLLM` (`pip install -qU "semantic-router[local]"`, see [here](https://github.com/aurelio-labs/semantic-router/blob/main/docs/05-local-execution.ipynb)). To use the `HybridRouteLayer`, you must `pip install -qU "semantic-router[hybrid]"`._
+
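+As a quick illustration of the local option, here is a minimal sketch that swaps in the `HuggingFaceEncoder` in place of the API-based encoders shown later (this assumes the `[local]` extra above is installed; see the local execution notebook for full details):
+
+```python
+from semantic_router.encoders import HuggingFaceEncoder
+
+# runs entirely on your machine via a local embedding model, no API key required
+encoder = HuggingFaceEncoder()
+```
+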
+We begin by defining a set of `Route` objects. These are the decision paths that the semantic router can decide to use. Let's start with two simple routes for now: one for talk about _politics_ and another for _chitchat_:
+
+```python
+from semantic_router import Route
+
+# we could use this as a guide for our chatbot to avoid political conversations
+politics = Route(
+    name="politics",
+    utterances=[
+        "isn't politics the best thing ever",
+        "why don't you tell me about your political opinions",
+        "don't you just love the president",
+        "they're going to destroy this country!",
+        "they will save the country!",
+    ],
+)
+
+# this could be used as an indicator to our chatbot to switch to a more
+# conversational prompt
+chitchat = Route(
+    name="chitchat",
+    utterances=[
+        "how's the weather today?",
+        "how are things going?",
+        "lovely weather today",
+        "the weather is horrendous",
+        "let's go to the chippy",
+    ],
+)
+
+# we place both of our routes together into a single list
+routes = [politics, chitchat]
+```
+
+We have our routes ready; now we initialize an embedding / encoder model. Two popular options are the `CohereEncoder` and `OpenAIEncoder` (see the Integrations section below for the other encoders we support). To initialize one we do:
+
+```python
+import os
+from semantic_router.encoders import CohereEncoder, OpenAIEncoder
+
+# for Cohere
+os.environ["COHERE_API_KEY"] = "<YOUR_API_KEY>"
+encoder = CohereEncoder()
+
+# or for OpenAI
+os.environ["OPENAI_API_KEY"] = "<YOUR_API_KEY>"
+encoder = OpenAIEncoder()
+```
+
+With our `routes` and `encoder` defined, we now create a `RouteLayer`. The route layer handles our semantic decision-making.
+
+```python
+from semantic_router.layer import RouteLayer
+
+rl = RouteLayer(encoder=encoder, routes=routes)
+```
+
+We can now use our route layer to make super fast decisions based on user queries. Let's try with two queries that should trigger our route decisions:
+
+```python
+rl("don't you love politics?").name
+```
+
+```
+[Out]: 'politics'
+```
+
+Correct decision. Let's try another:
+
+```python
+rl("how's the weather today?").name
+```
+
+```
+[Out]: 'chitchat'
+```
+
+We get both decisions correct! Now let's try sending an unrelated query:
+
+```python
+rl("I'm interested in learning about llama 2").name
+```
+
+```
+[Out]:
+```
+
+In this case, no decision could be made because nothing matched, so our route layer returned `None` for the route name!
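+
+One simple way to handle this in code is to branch on the returned route name. A rough sketch of a fallback pattern (the reply strings are just placeholders, not part of semantic-router):
+
+```python
+route = rl("I'm interested in learning about llama 2")
+
+if route.name is None:
+    # nothing matched, so fall back to some default behaviour
+    reply = "I'm not sure how to route this, passing it to the general LLM."
+else:
+    reply = f"Routing this query via the '{route.name}' route."
+```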
+
+## Integrations
+
+The _encoders_ of semantic router include easy-to-use integrations with [Cohere](https://github.com/aurelio-labs/semantic-router/blob/main/semantic_router/encoders/cohere.py), [OpenAI](https://github.com/aurelio-labs/semantic-router/blob/main/docs/encoders/openai-embed-3.ipynb), [Hugging Face](https://github.com/aurelio-labs/semantic-router/blob/main/docs/encoders/huggingface.ipynb), [FastEmbed](https://github.com/aurelio-labs/semantic-router/blob/main/docs/encoders/fastembed.ipynb), and [more](https://github.com/aurelio-labs/semantic-router/tree/main/semantic_router/encoders) — we even support [multi-modality](https://github.com/aurelio-labs/semantic-router/blob/main/docs/07-multi-modal.ipynb)!
+
+Our utterance vector space also integrates with [Pinecone](https://github.com/aurelio-labs/semantic-router/blob/main/docs/indexes/pinecone.ipynb) and [Qdrant](https://github.com/aurelio-labs/semantic-router/blob/main/docs/indexes/qdrant.ipynb)!
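+
+For example, here is a rough sketch of plugging a Pinecone-backed index into the route layer (this assumes the relevant Pinecone dependencies are installed and that credentials are picked up from `PINECONE_API_KEY`; see the Pinecone notebook above for the exact setup):
+
+```python
+import os
+from semantic_router.index import PineconeIndex
+from semantic_router.layer import RouteLayer
+
+os.environ["PINECONE_API_KEY"] = "<YOUR_API_KEY>"
+
+# store and query the route utterance vectors in Pinecone rather than in memory
+index = PineconeIndex()
+rl = RouteLayer(encoder=encoder, routes=routes, index=index)
+```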
 
 .. toctree::
    :maxdepth: 2
-- 
GitLab