Unverified commit 54e0a2c1, authored by Daniel Bustamante Ospina, committed by GitHub

Change `AgentWorkflow` to `FunctionAgent` in documentation. (#18042)

parent 13de07bf
@@ -30,7 +30,7 @@ Let's start with a simple example using an agent that can perform basic multipli
 ```python
 import asyncio
-from llama_index.core.agent.workflow import AgentWorkflow
+from llama_index.core.agent.workflow import FunctionAgent
 from llama_index.llms.openai import OpenAI
@@ -41,8 +41,10 @@ def multiply(a: float, b: float) -> float:
 # Create an agent workflow with our calculator tool
-agent = AgentWorkflow.from_tools_or_functions(
-    [multiply],
+agent = FunctionAgent(
+    name="Agent",
+    description="Useful for multiplying two numbers",
+    tools=[multiply],
     llm=OpenAI(model="gpt-4o-mini"),
     system_prompt="You are a helpful assistant that can multiply two numbers.",
 )
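For context, the updated OpenAI example assembles into a complete script along these lines (a sketch, not part of the diff; it assumes `OPENAI_API_KEY` is set in the environment, and the sample question is illustrative):

```python
import asyncio

from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI


def multiply(a: float, b: float) -> float:
    """Multiply two numbers and return the product."""
    return a * b


# Create an agent with our calculator tool
agent = FunctionAgent(
    name="Agent",
    description="Useful for multiplying two numbers",
    tools=[multiply],
    llm=OpenAI(model="gpt-4o-mini"),
    system_prompt="You are a helpful assistant that can multiply two numbers.",
)


async def main():
    # run() returns a handler; awaiting it yields the final response
    response = await agent.run(user_msg="What is 1234 * 4567?")
    print(str(response))


if __name__ == "__main__":
    asyncio.run(main())
```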
@@ -111,7 +113,7 @@ Our modified `starter.py` should look like this:
 ```python
 from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
-from llama_index.core.agent.workflow import AgentWorkflow
+from llama_index.core.agent.workflow import FunctionAgent
 from llama_index.llms.openai import OpenAI
 import asyncio
 import os
@@ -134,8 +136,10 @@ async def search_documents(query: str) -> str:
 # Create an enhanced workflow with both tools
-agent = AgentWorkflow.from_tools_or_functions(
-    [multiply, search_documents],
+agent = FunctionAgent(
+    name="Agent",
+    description="Useful for multiplying two numbers and searching through documents to answer questions.",
+    tools=[multiply, search_documents],
     llm=OpenAI(model="gpt-4o-mini"),
     system_prompt="""You are a helpful assistant that can perform calculations
     and search through documents to answer questions.""",
@@ -39,7 +39,7 @@ Let's start with a simple example using an agent that can perform basic multipli
 ```python
 import asyncio
-from llama_index.core.agent.workflow import AgentWorkflow
+from llama_index.core.agent.workflow import FunctionAgent
 from llama_index.llms.ollama import Ollama
@@ -50,8 +50,10 @@ def multiply(a: float, b: float) -> float:
 # Create an agent workflow with our calculator tool
-agent = AgentWorkflow.from_tools_or_functions(
-    [multiply],
+agent = FunctionAgent(
+    name="Agent",
+    description="Useful for multiplying two numbers",
+    tools=[multiply],
     llm=Ollama(model="llama3.1", request_timeout=360.0),
     system_prompt="You are a helpful assistant that can multiply two numbers.",
 )
# Human in the loop
You can also define tools that bring a human into the loop. This is useful for tasks that require human input, such as confirming a tool call or providing feedback.
As we'll see in our [Workflows tutorial](../workflows/index.md), Workflows work under the hood of AgentWorkflow by running steps that both emit and receive events. Here's a diagram of the steps (in blue) that make up an AgentWorkflow and the events (in green) that pass data between them. You'll recognize these events; they're the same ones we were handling in the output stream earlier.
![Workflows diagram](./agentworkflow.jpg)
To get a human in the loop, we'll have our tool emit an event that isn't received by any other step in the workflow. We'll then tell our tool to wait until it receives a specific "reply" event.
We have built-in `InputRequiredEvent` and `HumanResponseEvent` events to use for this purpose. If you want to capture different forms of human input, you can subclass these events to match your own preferences. Let's import them, along with the `Context` our tool will use to emit and await events:
```python
from llama_index.core.workflow import (
    Context,
    InputRequiredEvent,
    HumanResponseEvent,
)
```
Next we'll create a tool that performs a hypothetical dangerous task. There are a couple of new things happening here:
* We're calling `write_event_to_stream` with an `InputRequiredEvent`. This emits an event to the external stream to be captured. You can attach arbitrary data to the event, which we do in the form of a `user_name`.
* We call `wait_for_event`, specifying that we want to wait for a `HumanResponseEvent` and that it must have the `user_name` set to "Laurie". You can see how this would be useful in a multi-user system where more than one incoming event might be involved.
```python
async def dangerous_task(ctx: Context) -> str:
    """A dangerous task that requires human confirmation."""
    # emit an event to the external stream to be captured
    ctx.write_event_to_stream(
        InputRequiredEvent(
            prefix="Are you sure you want to proceed? ",
            user_name="Laurie",
        )
    )
    # wait until we see a HumanResponseEvent
    response = await ctx.wait_for_event(
        HumanResponseEvent, requirements={"user_name": "Laurie"}
    )
    # act on the input from the event
    if response.response.strip().lower() == "yes":
        return "Dangerous task completed successfully."
    else:
        return "Dangerous task aborted."
```
We create our agent as usual, passing it the tool we just defined:
```python
workflow = FunctionAgent(
    name="Agent",
    description="Useful for performing dangerous tasks.",
    tools=[dangerous_task],
    llm=llm,
    system_prompt="You are a helpful assistant that can perform dangerous tasks.",
)
```
Now we can run the workflow, handling the `InputRequiredEvent` just like any other streaming event, and responding with a `HumanResponseEvent` passed in using the `send_event` method:
```python
handler = workflow.run(user_msg="I want to proceed with the dangerous task.")

async for event in handler.stream_events():
    if isinstance(event, InputRequiredEvent):
        # capture keyboard input
        response = input(event.prefix)
        # send our response back
        handler.ctx.send_event(
            HumanResponseEvent(
                response=response,
                user_name=event.user_name,
            )
        )

response = await handler
print(str(response))
```
As usual, you can see the [full code of this example](https://github.com/run-llama/python-agents-tutorial/blob/main/5_human_in_the_loop.py).
You can do anything you want to capture the input; you could use a GUI, or audio input, or even get another, separate agent involved. If your input is going to take a while, or happen in another process, you might want to [serialize the context](./state.md) and save it to a database or file so that you can resume the workflow later.
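A rough sketch of what pausing and resuming could look like, assuming the `JsonSerializer` and `Context` helpers from `llama_index.core.workflow`; `save_to_db` and `load_from_db` are hypothetical persistence helpers, not library APIs:

```python
from llama_index.core.workflow import Context, JsonSerializer

# pause: snapshot the context while the workflow waits for human input
ctx_dict = handler.ctx.to_dict(serializer=JsonSerializer())
save_to_db(ctx_dict)  # hypothetical: persist wherever you like

# later, possibly in another process: restore the context and resume
restored_ctx = Context.from_dict(
    workflow, load_from_db(), serializer=JsonSerializer()
)
handler = workflow.run(ctx=restored_ctx)
```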
Speaking of getting other agents involved, that brings us to our next section: [multi-agent systems](./multi_agent.md).
@@ -80,8 +80,10 @@ You could also pick another popular model accessible via API, such as those from
 Now we create our agent. It needs an array of tools, an LLM, and a system prompt to tell it what kind of agent to be. Your system prompt would usually be more detailed than this!
 ```python
-workflow = AgentWorkflow.from_tools_or_functions(
-    [multiply, add],
+workflow = FunctionAgent(
+    name="Agent",
+    description="Useful for performing basic mathematical operations.",
+    tools=[multiply, add],
     llm=llm,
     system_prompt="You are an agent that can perform basic mathematical operations using tools.",
 )
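Invoking the agent is unchanged by this edit; a typical call, following the tutorial's pattern (the question is illustrative):

```python
async def main():
    response = await workflow.run(user_msg="What is 20 + (2 * 4)?")
    print(str(response))
```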
@@ -26,8 +26,10 @@ tavily_tool = TavilyToolSpec(api_key=os.getenv("TAVILY_API_KEY"))
 Now we'll create an agent using that tool and an LLM that we initialized just like we did previously.
 ```python
-workflow = AgentWorkflow.from_tools_or_functions(
-    tavily_tool.to_tool_list(),
+workflow = FunctionAgent(
+    name="Agent",
+    description="Useful for searching the web for information.",
+    tools=tavily_tool.to_tool_list(),
     llm=llm,
     system_prompt="You're a helpful assistant that can search the web for information.",
 )
@@ -33,9 +33,11 @@ finance_tools.extend([multiply, add])
 And we'll ask a different question than last time, necessitating the use of the new tools:
 ```python
-workflow = AgentWorkflow.from_tools_or_functions(
-    finance_tools,
+workflow = FunctionAgent(
+    name="Agent",
+    description="Useful for performing financial operations.",
     llm=OpenAI(model="gpt-4o-mini"),
+    tools=finance_tools,
     system_prompt="You are a helpful assistant.",
 )
@@ -38,7 +38,7 @@ class BaseWorkflowAgent(BaseModel, PromptMixin, ABC):
     system_prompt: Optional[str] = Field(
         default=None, description="The system prompt for the agent"
     )
-    tools: Optional[List[BaseTool]] = Field(
+    tools: Optional[List[Union[BaseTool, Callable]]] = Field(
         default=None, description="The tools that the agent can use"
     )
     tool_retriever: Optional[ObjectRetriever] = Field(
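The widened `tools` annotation is what lets the documentation examples above pass plain functions directly. A minimal sketch of the difference (the explicit `FunctionTool` wrapping shown in the comment was required before this change):

```python
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.core.tools import FunctionTool
from llama_index.llms.openai import OpenAI


def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b


# previously a bare callable had to be wrapped first:
#   tools=[FunctionTool.from_defaults(fn=add)]
# with Union[BaseTool, Callable], the callable validates as-is:
agent = FunctionAgent(
    name="Agent",
    description="Useful for adding two numbers",
    tools=[add],
    llm=OpenAI(model="gpt-4o-mini"),
)
```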
 from abc import ABCMeta
-from typing import Any, Callable, Dict, List, Optional, Sequence, Union
+from typing import Any, Callable, Dict, List, Optional, Sequence, Union, cast
 from llama_index.core.agent.workflow.base_agent import BaseWorkflowAgent
 from llama_index.core.agent.workflow.function_agent import FunctionAgent
@@ -209,7 +209,7 @@ class AgentWorkflow(Workflow, PromptMixin, metaclass=AgentWorkflowMeta):
         if handoff_tool:
             tools.append(handoff_tool)
-        return self._ensure_tools_are_async(tools)
+        return self._ensure_tools_are_async(cast(List[BaseTool], tools))

     async def _init_context(self, ctx: Context, ev: StartEvent) -> None:
         """Initialize the context once, if needed."""
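The added `cast` is for the type checker only: by the time `_ensure_tools_are_async` is called, any bare callables have been converted into tool objects, so the list is homogeneous at runtime. A sketch of the general pattern (the `FunctionTool.from_defaults` conversion is an assumption for illustration, not the library's exact internals):

```python
from typing import Callable, List, Union, cast

from llama_index.core.tools import BaseTool, FunctionTool


def normalize_tools(tools: List[Union[BaseTool, Callable]]) -> List[BaseTool]:
    # wrap any bare callables so downstream code only sees BaseTool instances
    normalized = [
        t if isinstance(t, BaseTool) else FunctionTool.from_defaults(fn=t)
        for t in tools
    ]
    # cast() has no runtime effect; it only narrows the static type
    return cast(List[BaseTool], normalized)
```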