diff --git a/docs/08-async-dynamic-routes.ipynb b/docs/08-async-dynamic-routes.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..0cfb4e0a3baee9772b1b80565c4c73c0967fe98b
--- /dev/null
+++ b/docs/08-async-dynamic-routes.ipynb
@@ -0,0 +1,1027 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "UxqB7_Ieur0s"
+   },
+   "source": [
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/aurelio-labs/semantic-router/blob/main/docs/08-async-dynamic-routes.ipynb) [![Open nbviewer](https://raw.githubusercontent.com/pinecone-io/examples/master/assets/nbviewer-shield.svg)](https://nbviewer.org/github/aurelio-labs/semantic-router/blob/main/docs/08-async-dynamic-routes.ipynb)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "EduhQaNAur0u"
+   },
+   "source": [
+    "# Dynamic Routes"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "_4JgNeX4ur0v"
+   },
+   "source": [
+    "In semantic-router there are two types of routes that can be chosen. Both are created with the `Route` object; the only difference is that _static_ routes return just their `Route.name` when chosen, whereas _dynamic_ routes additionally use an LLM call to produce parameter input values.\n",
+    "\n",
+    "For example, a _static_ route will tell us if a query is talking about mathematics by returning the route name (which could be `\"math\"` for example). A _dynamic_ route does the same thing, but it also extracts key information from the input utterance to be used in a function associated with that route.\n",
+    "\n",
+    "For example we could provide a dynamic route with associated utterances:\n",
+    "\n",
+    "```\n",
+    "\"what is x to the power of y?\"\n",
+    "\"what is 9 to the power of 4?\"\n",
+    "\"calculate the result of base x and exponent y\"\n",
+    "\"calculate the result of base 10 and exponent 3\"\n",
+    "\"return x to the power of y\"\n",
+    "```\n",
+    "\n",
+    "and we could also provide the route with a schema outlining key features of the function:\n",
+    "\n",
+    "```\n",
+    "def power(base: float, exponent: float) -> float:\n",
+    "    \"\"\"Raise base to the power of exponent.\n",
+    "\n",
+    "    Args:\n",
+    "        base (float): The base number.\n",
+    "        exponent (float): The exponent to which the base is raised.\n",
+    "\n",
+    "    Returns:\n",
+    "        float: The result of base raised to the power of exponent.\n",
+    "    \"\"\"\n",
+    "    return base ** exponent\n",
+    "```\n",
+    "\n",
+    "Then, if the user's input utterance is \"What is 2 to the power of 3?\", the route will be triggered, as the input utterance is semantically similar to the route utterances. Furthermore, the route utilizes an LLM to identify that `base=2` and `exponent=3`. These values are returned in such a way that they can be used in the above `power` function. That is, the dynamic router automates the process of calling relevant functions from natural language inputs.\n",
+    "\n",
+    "***⚠️ Note: We have a fully local version of dynamic routes available at [docs/05-local-execution.ipynb](https://github.com/aurelio-labs/semantic-router/blob/main/docs/05-local-execution.ipynb). The local 05 version tends to outperform the OpenAI version we demo in this notebook, so we'd recommend trying [05](https://github.com/aurelio-labs/semantic-router/blob/main/docs/05-local-execution.ipynb)!***"
+   ]
+  },
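+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As a sketch of where this is heading, the extracted values can be plugged straight into `power`. The payload below is hypothetical, but its shape mirrors the `RouteChoice.function_call` outputs produced later in this notebook:\n",
+    "\n",
+    "```\n",
+    "def power(base: float, exponent: float) -> float:\n",
+    "    return base**exponent\n",
+    "\n",
+    "# hypothetical dynamic route output for \"What is 2 to the power of 3?\"\n",
+    "function_call = [\n",
+    "    {\"function_name\": \"power\", \"arguments\": {\"base\": 2, \"exponent\": 3}}\n",
+    "]\n",
+    "\n",
+    "for call in function_call:\n",
+    "    if call[\"function_name\"] == \"power\":\n",
+    "        print(power(**call[\"arguments\"]))  # prints 8\n",
+    "```"
+   ]
+  },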
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "bbmw8CO4ur0v"
+   },
+   "source": [
+    "## Installing the Library"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "id": "dLElfRhgur0v",
+    "outputId": "da0e506e-24cf-43da-9243-894a7c4955db"
+   },
+   "outputs": [],
+   "source": [
+    "!pip install -qU \\\n",
+    "    \"semantic-router==0.0.54\" \\\n",
+    "    tzdata"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "BixZd6Eour0w"
+   },
+   "source": [
+    "## Initializing Routes and RouteLayer"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "PxnW9qBvur0x"
+   },
+   "source": [
+    "Dynamic routes are treated in the same way as static routes, so let's begin by initializing a `RouteLayer` consisting of static routes."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "metadata": {
+    "id": "kc9Ty6Lgur0x",
+    "outputId": "f32e3a25-c073-4802-ced3-d7a5663670c1"
+   },
+   "outputs": [],
+   "source": [
+    "from semantic_router import Route\n",
+    "\n",
+    "politics = Route(\n",
+    "    name=\"politics\",\n",
+    "    utterances=[\n",
+    "        \"isn't politics the best thing ever\",\n",
+    "        \"why don't you tell me about your political opinions\",\n",
+    "        \"don't you just love the president\",\n",
+    "        \"don't you just hate the president\",\n",
+    "        \"they're going to destroy this country!\",\n",
+    "        \"they will save the country!\",\n",
+    "    ],\n",
+    ")\n",
+    "chitchat = Route(\n",
+    "    name=\"chitchat\",\n",
+    "    utterances=[\n",
+    "        \"how's the weather today?\",\n",
+    "        \"how are things going?\",\n",
+    "        \"lovely weather today\",\n",
+    "        \"the weather is horrendous\",\n",
+    "        \"let's go to the chippy\",\n",
+    "    ],\n",
+    ")\n",
+    "\n",
+    "routes = [politics, chitchat]"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "voWyqmffur0x"
+   },
+   "source": [
+    "We initialize our `RouteLayer` with our `encoder` and `routes`. We can use popular encoder APIs like `CohereEncoder` and `OpenAIEncoder`, or local alternatives like `FastEmbedEncoder`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 2,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "BI9AiDspur0y",
+    "outputId": "27329a54-3f16-44a5-ac20-13a6b26afb97"
+   },
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "from getpass import getpass\n",
+    "from semantic_router import RouteLayer\n",
+    "from semantic_router.encoders import CohereEncoder, OpenAIEncoder\n",
+    "\n",
+    "\n",
+    "os.environ[\"OPENAI_API_KEY\"] = os.getenv(\"OPENAI_API_KEY\") or getpass(\n",
+    "    \"Enter OpenAI API Key: \"\n",
+    ")\n",
+    "\n",
+    "# encoder = CohereEncoder()\n",
+    "encoder = OpenAIEncoder()\n",
+    "\n",
+    "rl = RouteLayer(encoder=encoder, routes=routes)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "GuLCeIS5ur0y"
+   },
+   "source": [
+    "We call the route layer, which so far contains only static routes, using the asynchronous `acall` method:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 3,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "RouteChoice(name='chitchat', function_call=None, similarity_score=None)"
+      ]
+     },
+     "execution_count": 3,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "await rl.acall(\"how's the weather today?\")"
+   ]
+  },
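+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Note that `acall` is the asynchronous counterpart of calling the layer directly. Jupyter runs an event loop for us, so we can `await` at the top level; in a plain Python script the call would need to be driven by `asyncio`, for example:\n",
+    "\n",
+    "```\n",
+    "import asyncio\n",
+    "\n",
+    "async def main():\n",
+    "    choice = await rl.acall(\"how's the weather today?\")\n",
+    "    print(choice.name)\n",
+    "\n",
+    "asyncio.run(main())\n",
+    "```"
+   ]
+  },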
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "McbLKO26ur0y"
+   },
+   "source": [
+    "## Creating a Dynamic Route"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "ANAoEjxYur0y"
+   },
+   "source": [
+    "As with static routes, we must create a dynamic route before adding it to our route layer. To make a route dynamic, we provide its `function_schemas` as a list. Each function schema describes a function's purpose and parameters, so that an LLM can decide how to call it correctly."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 4,
+   "metadata": {
+    "id": "5jaF1Xa5ur0y"
+   },
+   "outputs": [],
+   "source": [
+    "from datetime import datetime\n",
+    "from zoneinfo import ZoneInfo\n",
+    "\n",
+    "\n",
+    "def get_time(timezone: str) -> str:\n",
+    "    \"\"\"Finds the current time in a specific timezone.\n",
+    "\n",
+    "    :param timezone: The timezone to find the current time in, should\n",
+    "        be a valid timezone from the IANA Time Zone Database like\n",
+    "        \"America/New_York\" or \"Europe/London\". Do NOT put the place\n",
+    "        name itself like \"rome\", or \"new york\", you must provide\n",
+    "        the IANA format.\n",
+    "    :type timezone: str\n",
+    "    :return: The current time in the specified timezone.\"\"\"\n",
+    "    now = datetime.now(ZoneInfo(timezone))\n",
+    "    return now.strftime(\"%H:%M\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 5,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/",
+     "height": 35
+    },
+    "id": "YyFKV8jMur0z",
+    "outputId": "29cf80f4-552c-47bb-fbf9-019f5dfdf00a"
+   },
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "'02:44'"
+      ]
+     },
+     "execution_count": 5,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "get_time(\"America/New_York\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "4qyaRuNXur0z"
+   },
+   "source": [
+    "To generate the function schema we can use the `get_schemas_openai` function from the `semantic_router.llms.openai` module."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 6,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "tOjuhp5Xur0z",
+    "outputId": "ca88a3ea-d70a-4950-be9a-63fab699de3b"
+   },
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[{'type': 'function',\n",
+       "  'function': {'name': 'get_time',\n",
+       "   'description': 'Finds the current time in a specific timezone.\\n\\n:param timezone: The timezone to find the current time in, should\\n    be a valid timezone from the IANA Time Zone Database like\\n    \"America/New_York\" or \"Europe/London\". Do NOT put the place\\n    name itself like \"rome\", or \"new york\", you must provide\\n    the IANA format.\\n:type timezone: str\\n:return: The current time in the specified timezone.',\n",
+       "   'parameters': {'type': 'object',\n",
+       "    'properties': {'timezone': {'type': 'string',\n",
+       "      'description': 'The timezone to find the current time in, should\\n    be a valid timezone from the IANA Time Zone Database like\\n    \"America/New_York\" or \"Europe/London\". Do NOT put the place\\n    name itself like \"rome\", or \"new york\", you must provide\\n    the IANA format.'}},\n",
+       "    'required': ['timezone']}}}]"
+      ]
+     },
+     "execution_count": 6,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "from semantic_router.llms.openai import get_schemas_openai\n",
+    "\n",
+    "schemas = get_schemas_openai([get_time])\n",
+    "schemas"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "HcF7jGjAur0z"
+   },
+   "source": [
+    "We use this to define our dynamic route:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 7,
+   "metadata": {
+    "id": "iesBG9P3ur0z"
+   },
+   "outputs": [],
+   "source": [
+    "time_route = Route(\n",
+    "    name=\"get_time\",\n",
+    "    utterances=[\n",
+    "        \"what is the time in new york city?\",\n",
+    "        \"what is the time in london?\",\n",
+    "        \"I live in Rome, what time is it?\",\n",
+    "    ],\n",
+    "    function_schemas=schemas,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "ZiUs3ovpur0z"
+   },
+   "source": [
+    "Now we add the new route to our route layer:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 8,
+   "metadata": {
+    "colab": {
+     "base_uri": "https://localhost:8080/"
+    },
+    "id": "-0vY8PRXur0z",
+    "outputId": "db01e14c-eab3-4f93-f4c2-e30f508c8b5d"
+   },
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "\u001b[32m2024-07-19 14:44:29 INFO semantic_router.utils.logger Adding `get_time` route\u001b[0m\n"
+     ]
+    }
+   ],
+   "source": [
+    "rl.add(time_route)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "7yoE0IrNur0z"
+   },
+   "source": [
+    "Now we can ask our layer a time-related question to trigger our new dynamic route."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 9,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "\u001b[33m2024-07-19 14:44:32 WARNING semantic_router.utils.logger No LLM provided for dynamic route, will use OpenAI LLM default\u001b[0m\n",
+      "\u001b[32m2024-07-19 14:44:34 INFO semantic_router.utils.logger OpenAI => Function Inputs: [{'function_name': 'get_time', 'arguments': {'timezone': 'America/New_York'}}]\u001b[0m\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "RouteChoice(name='get_time', function_call=[{'function_name': 'get_time', 'arguments': {'timezone': 'America/New_York'}}], similarity_score=None)"
+      ]
+     },
+     "execution_count": 9,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "response = await rl.acall(\"what is the time in new york city?\")\n",
+    "response"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 10,
+   "metadata": {
+    "id": "xvdyUPKqg9hr",
+    "outputId": "4161e7e0-ab6d-4e76-f068-2d66728305ff"
+   },
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "02:44\n"
+     ]
+    }
+   ],
+   "source": [
+    "for call in response.function_call:\n",
+    "    if call[\"function_name\"] == \"get_time\":\n",
+    "        args = call[\"arguments\"]\n",
+    "        result = get_time(**args)\n",
+    "        print(result)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "Qt0vkq2Xur00"
+   },
+   "source": [
+    "Our dynamic route provides both the route itself _and_ the input parameters required to use the route."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "jToYBo8Ug9hr"
+   },
+   "source": [
+    "## Dynamic Routes with Multiple Functions"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "vEkTpoVAg9hr"
+   },
+   "source": [
+    "Routes can be assigned multiple functions. When such a route is selected by the route layer, one or more of its functions may be invoked, depending on whether the user's utterance contains information matching their arguments."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "BHUlB3org9hs"
+   },
+   "source": [
+    "Let's define a Route that has multiple functions."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 11,
+   "metadata": {
+    "id": "dtrksov0g9hs"
+   },
+   "outputs": [],
+   "source": [
+    "from datetime import datetime\n",
+    "from zoneinfo import ZoneInfo\n",
+    "\n",
+    "\n",
+    "# Function with one argument\n",
+    "def get_time(timezone: str) -> str:\n",
+    "    \"\"\"Finds the current time in a specific timezone.\n",
+    "\n",
+    "    :param timezone: The timezone to find the current time in, should\n",
+    "        be a valid timezone from the IANA Time Zone Database like\n",
+    "        \"America/New_York\" or \"Europe/London\". Do NOT put the place\n",
+    "        name itself like \"rome\", or \"new york\", you must provide\n",
+    "        the IANA format.\n",
+    "    :type timezone: str\n",
+    "    :return: The current time in the specified timezone.\"\"\"\n",
+    "    now = datetime.now(ZoneInfo(timezone))\n",
+    "    return now.strftime(\"%H:%M\")\n",
+    "\n",
+    "\n",
+    "def get_time_difference(timezone1: str, timezone2: str) -> str:\n",
+    "    \"\"\"Calculates the time difference between two timezones.\n",
+    "    :param timezone1: The first timezone, should be a valid timezone from the IANA Time Zone Database like \"America/New_York\" or \"Europe/London\".\n",
+    "    :param timezone2: The second timezone, should be a valid timezone from the IANA Time Zone Database like \"America/New_York\" or \"Europe/London\".\n",
+    "    :type timezone1: str\n",
+    "    :type timezone2: str\n",
+    "    :return: The time difference in hours between the two timezones.\"\"\"\n",
+    "    # Get the current time in UTC\n",
+    "    now_utc = datetime.now(ZoneInfo(\"UTC\"))\n",
+    "\n",
+    "    # Convert the UTC time to the specified timezones\n",
+    "    tz1_time = now_utc.astimezone(ZoneInfo(timezone1))\n",
+    "    tz2_time = now_utc.astimezone(ZoneInfo(timezone2))\n",
+    "\n",
+    "    # Calculate the difference in offsets from UTC\n",
+    "    tz1_offset = tz1_time.utcoffset().total_seconds()\n",
+    "    tz2_offset = tz2_time.utcoffset().total_seconds()\n",
+    "\n",
+    "    # Calculate the difference in hours\n",
+    "    hours_difference = (tz2_offset - tz1_offset) / 3600\n",
+    "\n",
+    "    return f\"The time difference between {timezone1} and {timezone2} is {hours_difference} hours.\"\n",
+    "\n",
+    "\n",
+    "# Function with three arguments\n",
+    "def convert_time(time: str, from_timezone: str, to_timezone: str) -> str:\n",
+    "    \"\"\"Converts a specific time from one timezone to another.\n",
+    "    :param time: The time to convert in HH:MM format.\n",
+    "    :param from_timezone: The original timezone of the time, should be a valid IANA timezone.\n",
+    "    :param to_timezone: The target timezone for the time, should be a valid IANA timezone.\n",
+    "    :type time: str\n",
+    "    :type from_timezone: str\n",
+    "    :type to_timezone: str\n",
+    "    :return: The converted time in the target timezone.\n",
+    "    :raises ValueError: If the time format or timezone strings are invalid.\n",
+    "\n",
+    "    Example:\n",
+    "        convert_time(\"12:30\", \"America/New_York\", \"Asia/Tokyo\") -> \"03:30\"\n",
+    "    \"\"\"\n",
+    "    try:\n",
+    "        # Use today's date to avoid historical timezone issues\n",
+    "        today = datetime.now().date()\n",
+    "        datetime_string = f\"{today} {time}\"\n",
+    "        time_obj = datetime.strptime(datetime_string, \"%Y-%m-%d %H:%M\").replace(\n",
+    "            tzinfo=ZoneInfo(from_timezone)\n",
+    "        )\n",
+    "\n",
+    "        converted_time = time_obj.astimezone(ZoneInfo(to_timezone))\n",
+    "\n",
+    "        formatted_time = converted_time.strftime(\"%H:%M\")\n",
+    "        return formatted_time\n",
+    "    except Exception as e:\n",
+    "        raise ValueError(f\"Error converting time: {e}\")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 12,
+   "metadata": {
+    "id": "AjoYy7mFg9hs"
+   },
+   "outputs": [],
+   "source": [
+    "functions = [get_time, get_time_difference, convert_time]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 13,
+   "metadata": {
+    "id": "DoOkXV2Tg9hs",
+    "outputId": "f1e0fe08-b6ed-4f50-d845-5c54832ca677"
+   },
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "[{'type': 'function',\n",
+       "  'function': {'name': 'get_time',\n",
+       "   'description': 'Finds the current time in a specific timezone.\\n\\n:param timezone: The timezone to find the current time in, should\\n    be a valid timezone from the IANA Time Zone Database like\\n    \"America/New_York\" or \"Europe/London\". Do NOT put the place\\n    name itself like \"rome\", or \"new york\", you must provide\\n    the IANA format.\\n:type timezone: str\\n:return: The current time in the specified timezone.',\n",
+       "   'parameters': {'type': 'object',\n",
+       "    'properties': {'timezone': {'type': 'string',\n",
+       "      'description': 'The timezone to find the current time in, should\\n    be a valid timezone from the IANA Time Zone Database like\\n    \"America/New_York\" or \"Europe/London\". Do NOT put the place\\n    name itself like \"rome\", or \"new york\", you must provide\\n    the IANA format.'}},\n",
+       "    'required': ['timezone']}}},\n",
+       " {'type': 'function',\n",
+       "  'function': {'name': 'get_time_difference',\n",
+       "   'description': 'Calculates the time difference between two timezones.\\n:param timezone1: The first timezone, should be a valid timezone from the IANA Time Zone Database like \"America/New_York\" or \"Europe/London\".\\n:param timezone2: The second timezone, should be a valid timezone from the IANA Time Zone Database like \"America/New_York\" or \"Europe/London\".\\n:type timezone1: str\\n:type timezone2: str\\n:return: The time difference in hours between the two timezones.',\n",
+       "   'parameters': {'type': 'object',\n",
+       "    'properties': {'timezone1': {'type': 'string',\n",
+       "      'description': 'The first timezone, should be a valid timezone from the IANA Time Zone Database like \"America/New_York\" or \"Europe/London\".'},\n",
+       "     'timezone2': {'type': 'string',\n",
+       "      'description': 'The second timezone, should be a valid timezone from the IANA Time Zone Database like \"America/New_York\" or \"Europe/London\".'}},\n",
+       "    'required': ['timezone1', 'timezone2']}}},\n",
+       " {'type': 'function',\n",
+       "  'function': {'name': 'convert_time',\n",
+       "   'description': 'Converts a specific time from one timezone to another.\\n:param time: The time to convert in HH:MM format.\\n:param from_timezone: The original timezone of the time, should be a valid IANA timezone.\\n:param to_timezone: The target timezone for the time, should be a valid IANA timezone.\\n:type time: str\\n:type from_timezone: str\\n:type to_timezone: str\\n:return: The converted time in the target timezone.\\n:raises ValueError: If the time format or timezone strings are invalid.\\n\\nExample:\\n    convert_time(\"12:30\", \"America/New_York\", \"Asia/Tokyo\") -> \"03:30\"',\n",
+       "   'parameters': {'type': 'object',\n",
+       "    'properties': {'time': {'type': 'string',\n",
+       "      'description': 'The time to convert in HH:MM format.'},\n",
+       "     'from_timezone': {'type': 'string',\n",
+       "      'description': 'The original timezone of the time, should be a valid IANA timezone.'},\n",
+       "     'to_timezone': {'type': 'string',\n",
+       "      'description': 'The target timezone for the time, should be a valid IANA timezone.'}},\n",
+       "    'required': ['time', 'from_timezone', 'to_timezone']}}}]"
+      ]
+     },
+     "execution_count": 13,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "# Generate schemas for all functions\n",
+    "from semantic_router.llms.openai import get_schemas_openai\n",
+    "\n",
+    "schemas = get_schemas_openai(functions)\n",
+    "schemas"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 14,
+   "metadata": {
+    "id": "YBRHxhnkg9hs"
+   },
+   "outputs": [],
+   "source": [
+    "# Define the dynamic route with multiple functions\n",
+    "multi_function_route = Route(\n",
+    "    name=\"timezone_management\",\n",
+    "    utterances=[\n",
+    "        # Utterances for get_time function\n",
+    "        \"what is the time in New York?\",\n",
+    "        \"current time in Berlin?\",\n",
+    "        \"tell me the time in Moscow right now\",\n",
+    "        \"can you show me the current time in Tokyo?\",\n",
+    "        \"please provide the current time in London\",\n",
+    "        # Utterances for get_time_difference function\n",
+    "        \"how many hours ahead is Tokyo from London?\",\n",
+    "        \"time difference between Sydney and Cairo\",\n",
+    "        \"what's the time gap between Los Angeles and New York?\",\n",
+    "        \"how much time difference is there between Paris and Sydney?\",\n",
+    "        \"calculate the time difference between Dubai and Toronto\",\n",
+    "        # Utterances for convert_time function\n",
+    "        \"convert 15:00 from New York time to Berlin time\",\n",
+    "        \"change 09:00 from Paris time to Moscow time\",\n",
+    "        \"adjust 20:00 from Rome time to London time\",\n",
+    "        \"convert 12:00 from Madrid time to Chicago time\",\n",
+    "        \"change 18:00 from Beijing time to Los Angeles time\",\n",
+    "        # All three functions\n",
+    "        \"What is the time in Seattle? What is the time difference between Mumbai and Tokyo? What is 5:53 Toronto time in Sydney time?\",\n",
+    "    ],\n",
+    "    function_schemas=schemas,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 15,
+   "metadata": {
+    "id": "yEbQadQbg9ht"
+   },
+   "outputs": [],
+   "source": [
+    "routes = [politics, chitchat, multi_function_route]"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 16,
+   "metadata": {
+    "id": "C0aYIXaog9ht",
+    "outputId": "74114a86-4a6f-49c5-8e2e-600f577d63f5"
+   },
+   "outputs": [],
+   "source": [
+    "rl2 = RouteLayer(encoder=encoder, routes=routes)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "cG98YLZ5g9ht"
+   },
+   "source": [
+    "### Function to Parse Route Layer Responses"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 17,
+   "metadata": {
+    "id": "PJR97klVg9ht"
+   },
+   "outputs": [],
+   "source": [
+    "def parse_response(response):\n",
+    "    for call in response.function_call:\n",
+    "        args = call[\"arguments\"]\n",
+    "        if call[\"function_name\"] == \"get_time\":\n",
+    "            result = get_time(**args)\n",
+    "        elif call[\"function_name\"] == \"get_time_difference\":\n",
+    "            result = get_time_difference(**args)\n",
+    "        elif call[\"function_name\"] == \"convert_time\":\n",
+    "            result = convert_time(**args)\n",
+    "        else:\n",
+    "            continue\n",
+    "        print(result)"
+   ]
+  },
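+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "As the number of functions grows, a chain of per-name checks scales poorly. One common alternative (a sketch, not a semantic-router feature) is a dispatch dictionary keyed on the function name:\n",
+    "\n",
+    "```\n",
+    "FUNCTION_MAP = {f.__name__: f for f in [get_time, get_time_difference, convert_time]}\n",
+    "\n",
+    "def parse_response_mapped(response):\n",
+    "    for call in response.function_call:\n",
+    "        fn = FUNCTION_MAP.get(call[\"function_name\"])\n",
+    "        if fn is not None:\n",
+    "            print(fn(**call[\"arguments\"]))\n",
+    "```"
+   ]
+  },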
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "OUbPbxZKg9ht"
+   },
+   "source": [
+    "### Checking that Politics Non-Dynamic Route Still Works"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 18,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "RouteChoice(name='politics', function_call=None, similarity_score=None)"
+      ]
+     },
+     "execution_count": 18,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "response = await rl2.acall(\"What is your political leaning?\")\n",
+    "response"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "ZHgw8QoWg9ht"
+   },
+   "source": [
+    "### Checking that Chitchat Non-Dynamic Route Still Works"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 19,
+   "metadata": {},
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "RouteChoice(name='chitchat', function_call=None, similarity_score=None)"
+      ]
+     },
+     "execution_count": 19,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "response = await rl2.acall(\"Hello bot, how are you today?\")\n",
+    "response"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "uZDiY787g9hu"
+   },
+   "source": [
+    "### Testing the `multi_function_route` - The `get_time` Function"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 20,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "\u001b[33m2024-07-19 14:45:02 WARNING semantic_router.utils.logger No LLM provided for dynamic route, will use OpenAI LLM default\u001b[0m\n",
+      "\u001b[32m2024-07-19 14:45:03 INFO semantic_router.utils.logger OpenAI => Function Inputs: [{'function_name': 'get_time', 'arguments': {'timezone': 'America/New_York'}}]\u001b[0m\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "RouteChoice(name='timezone_management', function_call=[{'function_name': 'get_time', 'arguments': {'timezone': 'America/New_York'}}], similarity_score=None)"
+      ]
+     },
+     "execution_count": 20,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "response = await rl2.acall(\"what is the time in New York?\")\n",
+    "response"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 21,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "02:45\n"
+     ]
+    }
+   ],
+   "source": [
+    "parse_response(response)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "wcjQ4Dbpg9hu"
+   },
+   "source": [
+    "### Testing the `multi_function_route` - The `get_time_difference` Function"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 22,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "\u001b[32m2024-07-19 14:45:07 INFO semantic_router.utils.logger OpenAI => Function Inputs: [{'function_name': 'get_time_difference', 'arguments': {'timezone1': 'America/Los_Angeles', 'timezone2': 'Europe/Istanbul'}}]\u001b[0m\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "RouteChoice(name='timezone_management', function_call=[{'function_name': 'get_time_difference', 'arguments': {'timezone1': 'America/Los_Angeles', 'timezone2': 'Europe/Istanbul'}}], similarity_score=None)"
+      ]
+     },
+     "execution_count": 22,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "response = await rl2.acall(\n",
+    "    \"What is the time difference between Los Angeles and Istanbul?\"\n",
+    ")\n",
+    "response"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 23,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "The time difference between America/Los_Angeles and Europe/Istanbul is 10.0 hours.\n"
+     ]
+    }
+   ],
+   "source": [
+    "parse_response(response)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "14qz-ApLg9hv"
+   },
+   "source": [
+    "### Testing the `multi_function_route` - The `convert_time` Function"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 24,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "\u001b[32m2024-07-19 14:45:10 INFO semantic_router.utils.logger OpenAI => Function Inputs: [{'function_name': 'convert_time', 'arguments': {'time': '23:02', 'from_timezone': 'Asia/Dubai', 'to_timezone': 'Asia/Tokyo'}}]\u001b[0m\n"
+     ]
+    },
+    {
+     "data": {
+      "text/plain": [
+       "RouteChoice(name='timezone_management', function_call=[{'function_name': 'convert_time', 'arguments': {'time': '23:02', 'from_timezone': 'Asia/Dubai', 'to_timezone': 'Asia/Tokyo'}}], similarity_score=None)"
+      ]
+     },
+     "execution_count": 24,
+     "metadata": {},
+     "output_type": "execute_result"
+    }
+   ],
+   "source": [
+    "response = await rl2.acall(\n",
+    "    \"What is 23:02 Dubai time in Tokyo time? Please and thank you.\"\n",
+    ")\n",
+    "response"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 25,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "04:02\n"
+     ]
+    }
+   ],
+   "source": [
+    "parse_response(response)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "id": "TSRfC6JJg9hv"
+   },
+   "source": [
+    "### The Cool Bit - Testing `multi_function_route` - Multiple Functions at Once"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 26,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "\u001b[32m2024-07-19 14:45:15 INFO semantic_router.utils.logger OpenAI => Function Inputs: [{'function_name': 'get_time', 'arguments': {'timezone': 'Europe/Prague'}}, {'function_name': 'get_time_difference', 'arguments': {'timezone1': 'Europe/Berlin', 'timezone2': 'Asia/Shanghai'}}, {'function_name': 'convert_time', 'arguments': {'time': '05:53', 'from_timezone': 'Europe/Lisbon', 'to_timezone': 'Asia/Bangkok'}}]\u001b[0m\n"
+     ]
+    }
+   ],
+   "source": [
+    "response = await rl2.acall(\n",
+    "    \"\"\"\n",
+    "    What is the time in Prague?\n",
+    "    What is the time difference between Frankfurt and Beijing?\n",
+    "    What is 5:53 Lisbon time in Bangkok time?\n",
+    "\"\"\"\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 27,
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "08:45\n",
+      "The time difference between Europe/Berlin and Asia/Shanghai is 6.0 hours.\n",
+      "11:53\n"
+     ]
+    }
+   ],
+   "source": [
+    "parse_response(response)"
+   ]
+  }
+ ],
+ "metadata": {
+  "colab": {
+   "provenance": []
+  },
+  "kernelspec": {
+   "display_name": "decision-layer",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.11.5"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}
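For reference, the notebook's `get_time`, `get_time_difference`, and `convert_time` tools are defined earlier in the notebook and are not part of this diff. A minimal self-contained sketch of helpers with the same signatures and output formats (inferred from the logged function calls and printed results above, so an assumption rather than the notebook's exact code; uses Python 3.9+ `zoneinfo`) might look like:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib IANA timezone database, Python 3.9+


def get_time(timezone: str) -> str:
    """Current wall-clock time in an IANA timezone, formatted HH:MM."""
    return datetime.now(ZoneInfo(timezone)).strftime("%H:%M")


def get_time_difference(timezone1: str, timezone2: str) -> str:
    """Offset of timezone2 relative to timezone1, in hours, at this moment."""
    now = datetime.now(ZoneInfo("UTC"))
    hours = (
        now.astimezone(ZoneInfo(timezone2)).utcoffset()
        - now.astimezone(ZoneInfo(timezone1)).utcoffset()
    ).total_seconds() / 3600
    return (
        f"The time difference between {timezone1} and {timezone2} "
        f"is {hours} hours."
    )


def convert_time(time: str, from_timezone: str, to_timezone: str) -> str:
    """Convert an HH:MM time (on today's date) between two timezones."""
    parsed = datetime.strptime(time, "%H:%M")
    src = datetime.now(ZoneInfo(from_timezone)).replace(
        hour=parsed.hour, minute=parsed.minute, second=0, microsecond=0
    )
    return src.astimezone(ZoneInfo(to_timezone)).strftime("%H:%M")
```

Because `RouteChoice.function_call` is a list of `{"function_name", "arguments"}` dicts, a `parse_response`-style helper only needs to look each name up and call it with `**arguments`.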
diff --git a/pyproject.toml b/pyproject.toml
index f78ed13fc5a435d8c391228770e2fdefa3635240..1ecd207d01171d23ea03f8ffd974a020cead72a1 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
 [tool.poetry]
 name = "semantic-router"
-version = "0.0.53"
+version = "0.0.54"
 description = "Super fast semantic router for AI decision making"
 authors = [
     "James Briggs <james@aurelio.ai>",
diff --git a/semantic_router/__init__.py b/semantic_router/__init__.py
index 19d5381ff51a0c688cbd0bf49c3c4c5f9242ac9b..bd7f093236b6f45cd64bf2d3cd17b31d9ae92158 100644
--- a/semantic_router/__init__.py
+++ b/semantic_router/__init__.py
@@ -4,4 +4,4 @@ from semantic_router.route import Route
 
 __all__ = ["RouteLayer", "HybridRouteLayer", "Route", "LayerConfig"]
 
-__version__ = "0.0.53"
+__version__ = "0.0.54"
diff --git a/semantic_router/layer.py b/semantic_router/layer.py
index 61824033d7b0fe69919e9d577c309b3fa85cabe5..7f80945e4085f2e5f28263ff1e4cb8b5838e3fed 100644
--- a/semantic_router/layer.py
+++ b/semantic_router/layer.py
@@ -309,10 +309,15 @@ class RouteLayer:
                     "Route has a function schema, but no text was provided."
                 )
             if route.function_schemas and not isinstance(route.llm, BaseLLM):
-                raise NotImplementedError(
-                    "Dynamic routes not yet supported for async calls."
-                )
-            return route(text)
+                if not self.llm:
+                    logger.warning(
+                        "No LLM provided for dynamic route, will use OpenAI LLM default"
+                    )
+                    self.llm = OpenAILLM()
+                    route.llm = self.llm
+                else:
+                    route.llm = self.llm
+            return await route.acall(text)
         elif passed and route is not None and simulate_static:
             return RouteChoice(
                 name=route.name,
diff --git a/semantic_router/llms/openai.py b/semantic_router/llms/openai.py
index 2a5311958f91a071212b6b14199205cddea3f01a..f22f409e8c59e830d9c1f20059c4be8d3bc10a27 100644
--- a/semantic_router/llms/openai.py
+++ b/semantic_router/llms/openai.py
@@ -22,6 +22,7 @@ from openai.types.chat.chat_completion_message_tool_call import (
 
 class OpenAILLM(BaseLLM):
     client: Optional[openai.OpenAI]
+    async_client: Optional[openai.AsyncOpenAI]
     temperature: Optional[float]
     max_tokens: Optional[int]
 
@@ -39,6 +40,7 @@ class OpenAILLM(BaseLLM):
         if api_key is None:
             raise ValueError("OpenAI API key cannot be 'None'.")
         try:
+            self.async_client = openai.AsyncOpenAI(api_key=api_key)
             self.client = openai.OpenAI(api_key=api_key)
         except Exception as e:
             raise ValueError(
@@ -64,6 +66,23 @@ class OpenAILLM(BaseLLM):
             )
         return tool_calls_info
 
+    async def async_extract_tool_calls_info(
+        self, tool_calls: List[ChatCompletionMessageToolCall]
+    ) -> List[Dict[str, Any]]:
+        tool_calls_info = []
+        for tool_call in tool_calls:
+            if tool_call.function.arguments is None:
+                raise ValueError(
+                    "Invalid output, expected arguments to be specified for each tool call."
+                )
+            tool_calls_info.append(
+                {
+                    "function_name": tool_call.function.name,
+                    "arguments": json.loads(tool_call.function.arguments),
+                }
+            )
+        return tool_calls_info
+
     def __call__(
         self,
         messages: List[Message],
@@ -108,6 +127,50 @@ class OpenAILLM(BaseLLM):
             logger.error(f"LLM error: {e}")
             raise Exception(f"LLM error: {e}") from e
 
+    async def acall(
+        self,
+        messages: List[Message],
+        function_schemas: Optional[List[Dict[str, Any]]] = None,
+    ) -> str:
+        if self.async_client is None:
+            raise ValueError("OpenAI async_client is not initialized.")
+        try:
+            tools: Union[List[Dict[str, Any]], NotGiven] = (
+                function_schemas if function_schemas is not None else NOT_GIVEN
+            )
+
+            completion = await self.async_client.chat.completions.create(
+                model=self.name,
+                messages=[m.to_openai() for m in messages],
+                temperature=self.temperature,
+                max_tokens=self.max_tokens,
+                tools=tools,  # type: ignore # We pass a list of dicts which get interpreted as Iterable[ChatCompletionToolParam].
+            )
+
+            if function_schemas:
+                tool_calls = completion.choices[0].message.tool_calls
+                if tool_calls is None:
+                    raise ValueError("Invalid output, expected a tool call.")
+                if len(tool_calls) < 1:
+                    raise ValueError(
+                        "Invalid output, expected at least one tool to be specified."
+                    )
+
+                # Collect information from each tool call
+                output = str(
+                    await self.async_extract_tool_calls_info(tool_calls)
+                )  # str in keeping with base type.
+            else:
+                content = completion.choices[0].message.content
+                if content is None:
+                    raise ValueError("Invalid output, expected content.")
+                output = content
+            return output
+
+        except Exception as e:
+            logger.error(f"LLM error: {e}")
+            raise Exception(f"LLM error: {e}") from e
+
     def extract_function_inputs(
         self, query: str, function_schemas: List[Dict[str, Any]]
     ) -> List[Dict[str, Any]]:
@@ -122,6 +185,25 @@ class OpenAILLM(BaseLLM):
         output = output.replace("'", '"')
         function_inputs = json.loads(output)
         logger.info(f"Function inputs: {function_inputs}")
+        if not self._is_valid_inputs(function_inputs, function_schemas):
+            raise ValueError("Invalid inputs")
+        return function_inputs
+
+    async def async_extract_function_inputs(
+        self, query: str, function_schemas: List[Dict[str, Any]]
+    ) -> List[Dict[str, Any]]:
+        system_prompt = "You are an intelligent AI. Given a command or request from the user, call the function to complete the request."
+        messages = [
+            Message(role="system", content=system_prompt),
+            Message(role="user", content=query),
+        ]
+        output = await self.acall(messages=messages, function_schemas=function_schemas)
+        if not output:
+            raise Exception("No output generated for extract function input")
+        output = output.replace("'", '"')
+        function_inputs = json.loads(output)
+        logger.info(f"OpenAI => Function Inputs: {function_inputs}")
         if not self._is_valid_inputs(function_inputs, function_schemas):
             raise ValueError("Invalid inputs")
         return function_inputs
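The new `async_extract_function_inputs` defers validation to `BaseLLM._is_valid_inputs`, which this diff doesn't touch. As a rough, self-contained illustration of the kind of check that step performs (the `is_valid_inputs` below is a hypothetical stand-in assuming OpenAI-style tool schemas, not the project's actual method):

```python
from typing import Any, Dict, List


def is_valid_inputs(
    function_inputs: List[Dict[str, Any]],
    function_schemas: List[Dict[str, Any]],
) -> bool:
    """Each extracted call must name a known function and supply
    every parameter that its schema marks as required."""
    schemas = {
        s["function"]["name"]: s["function"]["parameters"]
        for s in function_schemas
    }
    for call in function_inputs:
        params = schemas.get(call["function_name"])
        if params is None:
            return False  # model invented a function name
        if not all(k in call["arguments"] for k in params.get("required", [])):
            return False  # a required argument is missing
    return True


# Example schema in the OpenAI tool-calling layout.
example_schemas = [
    {
        "type": "function",
        "function": {
            "name": "get_time",
            "parameters": {
                "type": "object",
                "properties": {"timezone": {"type": "string"}},
                "required": ["timezone"],
            },
        },
    }
]
```

The inputs passed in are exactly the `[{"function_name": ..., "arguments": {...}}]` dicts produced by `async_extract_tool_calls_info` above.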
diff --git a/semantic_router/route.py b/semantic_router/route.py
index a32c778c8506217f542060683bb498989b560c67..3fc3f0407e2c7bda06c627bcedeb4bae48e59ac9 100644
--- a/semantic_router/route.py
+++ b/semantic_router/route.py
@@ -76,6 +76,28 @@ class Route(BaseModel):
             func_call = None
         return RouteChoice(name=self.name, function_call=func_call)
 
+    async def acall(self, query: Optional[str] = None) -> RouteChoice:
+        if self.function_schemas:
+            if not self.llm:
+                raise ValueError(
+                    "LLM is required for dynamic routes. Please ensure the `llm` "
+                    "attribute is set."
+                )
+            elif query is None:
+                raise ValueError(
+                    "Query is required for dynamic routes. Please ensure the `query` "
+                    "argument is passed."
+                )
+            # if a function schema is provided we generate the inputs
+            extracted_inputs = await self.llm.async_extract_function_inputs(  # type: ignore # openai-llm
+                query=query, function_schemas=self.function_schemas
+            )
+            func_call = extracted_inputs
+        else:
+            # otherwise we just pass None for the call
+            func_call = None
+        return RouteChoice(name=self.name, function_call=func_call)
+
     # def to_dict(self) -> Dict[str, Any]:
     #     return self.dict()
 
diff --git a/semantic_router/utils/defaults.py b/semantic_router/utils/defaults.py
index 75331c06581ad4692bc24f1633ba5a609ba28e47..151a9935db79e42df254d8b7b119a503cdd77e13 100644
--- a/semantic_router/utils/defaults.py
+++ b/semantic_router/utils/defaults.py
@@ -8,8 +8,8 @@ class EncoderDefault(Enum):
         "language_model": "BAAI/bge-small-en-v1.5",
     }
     OPENAI = {
-        "embedding_model": os.getenv("OPENAI_MODEL_NAME", "text-embedding-ada-002"),
-        "language_model": os.getenv("OPENAI_CHAT_MODEL_NAME", "gpt-3.5-turbo"),
+        "embedding_model": os.getenv("OPENAI_MODEL_NAME", "text-embedding-3-small"),
+        "language_model": os.getenv("OPENAI_CHAT_MODEL_NAME", "gpt-4o"),
     }
     COHERE = {
         "embedding_model": os.getenv("COHERE_MODEL_NAME", "embed-english-v3.0"),
@@ -20,10 +20,10 @@ class EncoderDefault(Enum):
         "language_model": os.getenv("MISTRALAI_CHAT_MODEL_NAME", "mistral-tiny"),
     }
     AZURE = {
-        "embedding_model": os.getenv("AZURE_OPENAI_MODEL", "text-embedding-ada-002"),
-        "language_model": os.getenv("OPENAI_CHAT_MODEL_NAME", "gpt-3.5-turbo"),
+        "embedding_model": os.getenv("AZURE_OPENAI_MODEL", "text-embedding-3-small"),
+        "language_model": os.getenv("OPENAI_CHAT_MODEL_NAME", "gpt-4o"),
         "deployment_name": os.getenv(
-            "AZURE_OPENAI_DEPLOYMENT_NAME", "text-embedding-ada-002"
+            "AZURE_OPENAI_DEPLOYMENT_NAME", "text-embedding-3-small"
         ),
     }
     GOOGLE = {
diff --git a/tests/unit/llms/test_llm_azure_openai.py b/tests/unit/llms/test_llm_azure_openai.py
index a50b08fb85eca26719da69db41b065d4ceef9e86..793091957df7fdd89792aebec43a6fe87763e02b 100644
--- a/tests/unit/llms/test_llm_azure_openai.py
+++ b/tests/unit/llms/test_llm_azure_openai.py
@@ -13,9 +13,7 @@ def azure_openai_llm(mocker):
 class TestOpenAILLM:
     def test_azure_openai_llm_init_with_api_key(self, azure_openai_llm):
         assert azure_openai_llm.client is not None, "Client should be initialized"
-        assert (
-            azure_openai_llm.name == "gpt-3.5-turbo"
-        ), "Default name not set correctly"
+        assert azure_openai_llm.name == "gpt-4o", "Default name not set correctly"
 
     def test_azure_openai_llm_init_success(self, mocker):
         mocker.patch("os.getenv", return_value="fake-api-key")
diff --git a/tests/unit/llms/test_llm_openai.py b/tests/unit/llms/test_llm_openai.py
index c5f987104e5fd097a46c92e057867f3bae406869..4287fc882214c8524c5ce611c11af136eea935c5 100644
--- a/tests/unit/llms/test_llm_openai.py
+++ b/tests/unit/llms/test_llm_openai.py
@@ -43,7 +43,7 @@ example_function_schema = {
 class TestOpenAILLM:
     def test_openai_llm_init_with_api_key(self, openai_llm):
         assert openai_llm.client is not None, "Client should be initialized"
-        assert openai_llm.name == "gpt-3.5-turbo", "Default name not set correctly"
+        assert openai_llm.name == "gpt-4o", "Default name not set correctly"
 
     def test_openai_llm_init_success(self, mocker):
         mocker.patch("os.getenv", return_value="fake-api-key")
diff --git a/tests/unit/test_hybrid_layer.py b/tests/unit/test_hybrid_layer.py
index 416eb93498aa90fd19b62b32dcf815f266f44726..fbf14566e3c9051331a3825fe7d952213ec38985 100644
--- a/tests/unit/test_hybrid_layer.py
+++ b/tests/unit/test_hybrid_layer.py
@@ -40,7 +40,7 @@ def cohere_encoder(mocker):
 @pytest.fixture
 def openai_encoder(mocker):
     mocker.patch.object(OpenAIEncoder, "__call__", side_effect=mock_encoder_call)
-    return OpenAIEncoder(name="text-embedding-ada-002", openai_api_key="test_api_key")
+    return OpenAIEncoder(name="text-embedding-3-small", openai_api_key="test_api_key")
 
 
 @pytest.fixture
@@ -88,8 +88,8 @@ class TestHybridRouteLayer:
             alpha=0.8,
         )
         assert route_layer.index is not None and route_layer.categories is not None
-        assert openai_encoder.score_threshold == 0.82
-        assert route_layer.score_threshold == 0.82
+        assert openai_encoder.score_threshold == 0.3
+        assert route_layer.score_threshold == 0.3
         assert route_layer.top_k == 10
         assert route_layer.alpha == 0.8
         assert len(route_layer.index) == 5
@@ -104,7 +104,7 @@ class TestHybridRouteLayer:
         route_layer_openai = HybridRouteLayer(
             encoder=openai_encoder, sparse_encoder=sparse_encoder
         )
-        assert route_layer_openai.score_threshold == 0.82
+        assert route_layer_openai.score_threshold == 0.3
 
     def test_add_route(self, openai_encoder):
         route_layer = HybridRouteLayer(
diff --git a/tests/unit/test_layer.py b/tests/unit/test_layer.py
index 4566401d0e47c19a03fb03210e775c738dfb894b..05979cd22d05c5ab696c984b2632c572e5e2971e 100644
--- a/tests/unit/test_layer.py
+++ b/tests/unit/test_layer.py
@@ -87,7 +87,7 @@ def cohere_encoder(mocker):
 @pytest.fixture
 def openai_encoder(mocker):
     mocker.patch.object(OpenAIEncoder, "__call__", side_effect=mock_encoder_call)
-    return OpenAIEncoder(name="text-embedding-ada-002", openai_api_key="test_api_key")
+    return OpenAIEncoder(name="text-embedding-3-small", openai_api_key="test_api_key")
 
 
 @pytest.fixture
@@ -155,8 +155,8 @@ class TestRouteLayer:
         route_layer = RouteLayer(
             encoder=openai_encoder, routes=routes, top_k=10, index=index_cls()
         )
-        assert openai_encoder.score_threshold == 0.82
-        assert route_layer.score_threshold == 0.82
+        assert openai_encoder.score_threshold == 0.3
+        assert route_layer.score_threshold == 0.3
         assert route_layer.top_k == 10
         assert len(route_layer.index) if route_layer.index is not None else 0 == 5
         assert (
@@ -172,7 +172,7 @@ class TestRouteLayer:
         assert cohere_encoder.score_threshold == 0.3
         assert route_layer_cohere.score_threshold == 0.3
         route_layer_openai = RouteLayer(encoder=openai_encoder, index=index_cls())
-        assert route_layer_openai.score_threshold == 0.82
+        assert route_layer_openai.score_threshold == 0.3
 
     def test_initialization_no_encoder(self, openai_encoder, index_cls):
         os.environ["OPENAI_API_KEY"] = "test_api_key"
@@ -189,8 +189,8 @@ class TestRouteLayer:
         route_layer_openai = RouteLayer(
             encoder=openai_encoder, routes=dynamic_routes, index=index_cls()
         )
-        assert openai_encoder.score_threshold == 0.82
-        assert route_layer_openai.score_threshold == 0.82
+        assert openai_encoder.score_threshold == 0.3
+        assert route_layer_openai.score_threshold == 0.3
 
     def test_add_route(self, openai_encoder, index_cls):
         route_layer = RouteLayer(encoder=openai_encoder, index=index_cls())
@@ -542,7 +542,7 @@ class TestRouteLayer:
         route_layer = RouteLayer(
             encoder=openai_encoder, routes=routes, index=index_cls()
         )
-        assert route_layer.get_thresholds() == {"Route 1": 0.82, "Route 2": 0.82}
+        assert route_layer.get_thresholds() == {"Route 1": 0.3, "Route 2": 0.3}
 
     def test_with_multiple_routes_passing_threshold(
         self, openai_encoder, routes, index_cls
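The updated tests above only exercise the sync paths. One way to unit-test the new `acall` code path without network access is to stub the awaited client call with `unittest.mock.AsyncMock`; a generic sketch of the pattern (`FakeAsyncLLM` is illustrative, not the project's `OpenAILLM`):

```python
import asyncio
from types import SimpleNamespace
from unittest.mock import AsyncMock


class FakeAsyncLLM:
    """Stand-in mirroring the shape of OpenAILLM.acall's happy path."""

    def __init__(self, async_client):
        self.async_client = async_client

    async def acall(self, prompt: str) -> str:
        completion = await self.async_client.chat.completions.create(
            messages=prompt
        )
        return completion.choices[0].message.content


# Canned object shaped like an OpenAI chat completion response.
canned = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content="mocked"))]
)

# Child mocks of AsyncMock are themselves AsyncMock, so the nested
# `create` is awaitable and resolves to its return_value.
client = AsyncMock()
client.chat.completions.create.return_value = canned

result = asyncio.run(FakeAsyncLLM(client).acall("hi"))
```

`assert_awaited_once()` on the stubbed `create` then verifies the async client (rather than the sync one) was actually used.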