Semantic Router is a superfast decision layer for your LLMs and agents. Rather than waiting for slow LLM generations to make tool-use decisions, we use the magic of semantic vector space to make those decisions — _routing_ our requests using _semantic_ meaning.
## Quickstart
To get started with _semantic-router_ we install it like so:
```
pip install -qU semantic-router
```
We begin by defining a set of `Decision` objects. These are the decision paths that the semantic router can decide to use. Let's try two simple decisions for now, one for talk on _politics_ and another for _chitchat_:
```python
from semantic_router.schema import Decision

# we could use this as a guide for our chatbot to avoid political conversations
politics = Decision(
    name="politics",
    utterances=[
        "isn't politics the best thing ever",
        "why don't you tell me about your political opinions",
        "don't you just love the president",
        "don't you just hate the president",
        "they're going to destroy this country!",
        "they will save the country!"
    ]
)

# this could be used as an indicator to our chatbot to switch to a more
# conversational prompt
chitchat = Decision(
    name="chitchat",
    utterances=[
        "how's the weather today?",
        "how are things going?",
        "lovely weather today",
        "the weather is horrendous",
        "let's go to the chippy"
    ]
)

# we place both of our decisions together into a single list
decisions = [politics, chitchat]
```
With our decisions ready, we initialize an embedding / encoder model. We currently support a `CohereEncoder` and `OpenAIEncoder`, with more encoders to be added soon. To initialize them, we do:
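As a minimal sketch, here is how the Cohere option might be set up, assuming the encoders are importable from `semantic_router.encoders` and that `CohereEncoder` reads your API key from the `COHERE_API_KEY` environment variable:

```python
import os

from semantic_router.encoders import CohereEncoder

# assumption: CohereEncoder picks up the key from this environment variable;
# replace the placeholder with your own Cohere API key
os.environ["COHERE_API_KEY"] = "<YOUR_API_KEY>"

# the encoder turns our utterances (and incoming queries) into vectors
encoder = CohereEncoder()
```

The `OpenAIEncoder` follows the same pattern, using your OpenAI credentials instead.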