Unverified commit 40ee7610, authored by Marcus Schiesser, committed by GitHub

feat: add asQueryTool to index and add factory methods for simplifying agent usage (#1715)

parent c14a21bc
Showing 373 additions and 220 deletions
---
"llamaindex": patch
"@llamaindex/workflow": patch
"@llamaindex/core": patch
---
Add factory methods agent and multiAgent to simplify agent usage
---
"llamaindex": patch
---
feat: add asQueryTool to index
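In short, the change replaces explicit workflow construction with factory calls. An illustrative before/after sketch (`tools`, `llm`, and the agents are placeholders):

```typescript
// Before: static helper and class constructor
const single = AgentWorkflow.fromTools({ tools, llm });
const multi = new AgentWorkflow({ agents: [agentA, agentB], rootAgent: agentA });

// After: factory methods taking the same parameters
const single2 = agent({ tools, llm });
const multi2 = multiAgent({ agents: [agentA, agentB], rootAgent: agentA });
```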
@@ -125,19 +125,20 @@ const response = await agent.chat({
description="Truly powerful retrieval-augmented generation applications use agentic techniques, and LlamaIndex.TS makes it easy to build them."
>
<CodeBlock
code={`import { FunctionTool } from "llamaindex";
import { OpenAIAgent } from "@llamaindex/openai";
code={`import { agent } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";
const interpreterTool = FunctionTool.from(...);
const systemPrompt = \`...\`;
// use a previously created LlamaIndex index to query information
const queryTool = index.queryTool({});
const agent = new OpenAIAgent({
llm,
tools: [interpreterTool],
systemPrompt,
const ragAgent = agent({
llm: new OpenAI({
model: "gpt-4o",
}),
tools: [queryTool],
});
await agent.chat('...');`}
await ragAgent.run('...');`}
lang="ts"
/>
</Feature>
......
@@ -6,25 +6,7 @@ import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSource from "!raw-loader!../../../../../../../examples/agentworkflow/blog_writer.ts";
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
`AgentWorkflow` is a powerful system that enables you to create and orchestrate one or multiple agents with tools to perform specific tasks. It's built on top of the base `Workflow` system and provides a streamlined interface for agent interactions.
## Installation
You'll need to install the `@llamaindex/workflow` package:
<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install @llamaindex/workflow
```
```shell tab="yarn"
yarn add @llamaindex/workflow
```
```shell tab="pnpm"
pnpm add @llamaindex/workflow
```
</Tabs>
Agent Workflows provide a powerful system for creating and orchestrating one or multiple agents with tools to perform specific tasks. They're built on top of the base `Workflow` system and offer a streamlined interface for agent interactions.
## Usage
@@ -33,7 +15,7 @@ You'll need to install the `@llamaindex/workflow` package:
The simplest use case is creating a single agent with specific tools. Here's an example of creating an assistant that tells jokes:
```typescript
import { AgentWorkflow, FunctionTool } from "llamaindex";
import { agent, FunctionTool } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";
// Define a joke-telling tool
@@ -45,8 +27,8 @@ const jokeTool = FunctionTool.from(
}
);
// Create an agent workflow with the tool
const workflow = AgentWorkflow.fromTools({
// Create a single agent workflow with the tool
const workflow = agent({
tools: [jokeTool],
llm: new OpenAI({
model: "gpt-4o-mini",
@@ -60,7 +42,7 @@ console.log(result); // Baby Llama is called cria
### Event Streaming
`AgentWorkflow` provides a unified interface for event streaming, making it easy to track and respond to different events during execution:
Agent Workflows provide a unified interface for event streaming, making it easy to track and respond to different events during execution:
```typescript
import { AgentToolCall, AgentStream } from "llamaindex";
@@ -81,7 +63,7 @@ for await (const event of context) {
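// A fuller version of the streaming loop, as a sketch: it assumes the
// `workflow` from the joke example above and the `delta` / `toolName`
// fields exposed on the event data.
const context = workflow.run("Tell me a joke, then explain why it is funny");

for await (const event of context) {
  if (event instanceof AgentStream) {
    process.stdout.write(event.data.delta); // incremental LLM output
  } else if (event instanceof AgentToolCall) {
    console.log(`\nCalling tool ${event.data.toolName}`);
  }
}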
### Multi-Agent Workflow
`AgentWorkflow` can orchestrate multiple agents, enabling complex interactions and task handoffs. Each agent in a multi-agent workflow requires:
An Agent Workflow can orchestrate multiple agents, enabling complex interactions and task handoffs. Each agent in a multi-agent workflow requires:
- `name`: Unique identifier for the agent
- `description`: Purpose description used for task routing
@@ -91,12 +73,12 @@ for await (const event of context) {
Here's an example of a multi-agent system that combines joke-telling and weather information:
```typescript
import { AgentWorkflow, FunctionAgent, FunctionTool } from "llamaindex";
import { multiAgent, agent, FunctionTool } from "llamaindex";
import { OpenAI } from "@llamaindex/openai";
import { z } from "zod";
// Create a weather agent
const weatherAgent = new FunctionAgent({
const weatherAgent = agent({
name: "WeatherAgent",
description: "Provides weather information for any city",
tools: [
@@ -115,7 +97,7 @@ const weatherAgent = new FunctionAgent({
});
// Create a joke-telling agent
const jokeAgent = new FunctionAgent({
const jokeAgent = agent({
name: "JokeAgent",
description: "Tells jokes and funny stories",
tools: [jokeTool], // Using the joke tool defined earlier
@@ -124,7 +106,7 @@ const jokeAgent = new FunctionAgent({
});
// Create the multi-agent workflow
const workflow = new AgentWorkflow({
const workflow = multiAgent({
agents: [jokeAgent, weatherAgent],
rootAgent: jokeAgent, // Start with the joke agent
});
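// Running the workflow, as a sketch (the prompt is illustrative); the root
// joke agent hands off to the weather agent when needed.
const result = await workflow.run(
  "Give me the weather in Tokyo and then tell a joke about it",
);
console.log(result);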
......
"use server";
import { HuggingFaceEmbedding } from "@llamaindex/huggingface";
import { SimpleDirectoryReader } from "@llamaindex/readers/directory";
import {
OpenAI,
OpenAIAgent,
QueryEngineTool,
Settings,
VectorStoreIndex,
} from "llamaindex";
import { OpenAI, OpenAIAgent, Settings, VectorStoreIndex } from "llamaindex";
Settings.llm = new OpenAI({
apiKey: process.env.NEXT_PUBLIC_OPENAI_KEY ?? "FAKE_KEY_TO_PASS_TESTS",
@@ -31,23 +25,20 @@ export async function getOpenAIModelRequest(query: string) {
const reader = new SimpleDirectoryReader();
const documents = await reader.loadData(currentDir);
const index = await VectorStoreIndex.fromDocuments(documents);
const retriever = index.asRetriever({
similarityTopK: 10,
});
const queryEngine = index.asQueryEngine({
retriever,
});
// define the query engine as a tool
const tools = [
new QueryEngineTool({
queryEngine: queryEngine,
index.queryTool({
options: {
similarityTopK: 10,
},
metadata: {
name: "deployment_details_per_env",
description: `This tool can answer detailed questions about deployments that happened in various environments.`,
},
}),
];
// create the agent
const agent = new OpenAIAgent({ tools });
......
import { OpenAI } from "@llamaindex/openai";
import { AgentWorkflow, FunctionTool } from "llamaindex";
import { FunctionTool, agent } from "llamaindex";
import { z } from "zod";
const csvData =
@@ -33,7 +33,7 @@ const userQuestion = "which are the best comedies after 2010?";
const systemPrompt =
"You are a Python interpreter.\n - You are given tasks to complete and you run python code to solve them.\n - The python code runs in a Jupyter notebook. Every time you call $(interpreter) tool, the python code is executed in a separate cell. It's okay to make multiple calls to $(interpreter).\n - Display visualizations using matplotlib or any other visualization library directly in the notebook. Shouldn't save the visualizations to a file, just return the base64 encoded data.\n - You can install any pip package (if it exists) if you need to but the usual packages for data analysis are already preinstalled.\n - You can run any python code you want in a secure environment.";
const workflow = AgentWorkflow.fromTools({
const workflow = agent({
tools: [interpreterTool],
llm,
verbose: false,
......
import { OpenAI } from "@llamaindex/openai";
import { AgentWorkflow, FunctionTool } from "llamaindex";
import { FunctionTool, agent } from "llamaindex";
import { z } from "zod";
const sumNumbers = FunctionTool.from(
@@ -27,7 +27,7 @@ const divideNumbers = FunctionTool.from(
);
async function main() {
const workflow = AgentWorkflow.fromTools({
const workflow = agent({
tools: [sumNumbers, divideNumbers],
llm: new OpenAI({ model: "gpt-4o-mini" }),
verbose: false,
......
import { OpenAI } from "@llamaindex/openai";
import { AgentStream, AgentWorkflow } from "llamaindex";
import { AgentStream, agent } from "llamaindex";
import { WikipediaTool } from "../wiki";
async function main() {
const llm = new OpenAI({ model: "gpt-4-turbo" });
const wikiTool = new WikipediaTool();
const workflow = AgentWorkflow.fromTools({
const workflow = agent({
tools: [wikiTool],
llm,
verbose: false,
......
import { OpenAI } from "@llamaindex/openai";
import fs from "fs";
import {
agent,
AgentToolCall,
AgentToolCallResult,
AgentWorkflow,
FunctionAgent,
FunctionTool,
multiAgent,
} from "llamaindex";
import os from "os";
import { z } from "zod";
@@ -34,7 +34,7 @@ const saveFileTool = FunctionTool.from(
);
async function main() {
const reportAgent = new FunctionAgent({
const reportAgent = agent({
name: "ReportAgent",
description:
"Responsible for crafting well-written blog posts based on research findings",
@@ -43,7 +43,7 @@ async function main() {
llm,
});
const researchAgent = new FunctionAgent({
const researchAgent = agent({
name: "ResearchAgent",
description:
"Responsible for gathering relevant information from the internet",
@@ -53,7 +53,7 @@ async function main() {
llm,
});
const workflow = new AgentWorkflow({
const workflow = multiAgent({
agents: [researchAgent, reportAgent],
rootAgent: researchAgent,
});
......
@@ -5,14 +5,14 @@
*/
import { OpenAI } from "@llamaindex/openai";
import {
agent,
AgentInput,
AgentOutput,
AgentStream,
AgentToolCall,
AgentToolCallResult,
AgentWorkflow,
FunctionAgent,
FunctionTool,
multiAgent,
StopEvent,
} from "llamaindex";
import { z } from "zod";
@@ -55,7 +55,7 @@ const temperatureFetcherTool = FunctionTool.from(
// Create agents
async function multiWeatherAgent() {
const converterAgent = new FunctionAgent({
const converterAgent = agent({
name: "TemperatureConverterAgent",
description:
"An agent that can convert temperatures from Fahrenheit to Celsius.",
@@ -63,7 +63,7 @@ async function multiWeatherAgent() {
llm,
});
const weatherAgent = new FunctionAgent({
const weatherAgent = agent({
name: "FetchWeatherAgent",
description: "An agent that can get the weather in a city. ",
systemPrompt:
@@ -76,7 +76,7 @@ async function multiWeatherAgent() {
});
// Create agent workflow with the agents
const workflow = new AgentWorkflow({
const workflow = multiAgent({
agents: [weatherAgent, converterAgent],
rootAgent: weatherAgent,
verbose: false,
......
@@ -2,17 +2,15 @@
* This example shows how to use AgentWorkflow as a single agent with tools
*/
import { OpenAI } from "@llamaindex/openai";
import { AgentWorkflow, Settings } from "llamaindex";
import { Settings, agent } from "llamaindex";
import { getWeatherTool } from "../agent/utils/tools";
const llm = new OpenAI({
Settings.llm = new OpenAI({
model: "gpt-4o",
});
Settings.llm = llm;
async function singleWeatherAgent() {
const workflow = AgentWorkflow.fromTools({
const workflow = agent({
tools: [getWeatherTool],
verbose: false,
});
......
import fs from "fs";
import {
agent,
AgentToolCall,
AgentToolCallResult,
AgentWorkflow,
FunctionAgent,
FunctionTool,
multiAgent,
} from "llamaindex";
import { z } from "zod";
@@ -63,7 +63,7 @@ const saveFileTool = FunctionTool.from(
);
async function main() {
const reportAgent = new FunctionAgent({
const reportAgent = agent({
name: "ReportAgent",
description:
"Responsible for creating concise reports about weather and inflation data",
@@ -72,7 +72,7 @@ async function main() {
llm,
});
const researchAgent = new FunctionAgent({
const researchAgent = agent({
name: "ResearchAgent",
description:
"Responsible for gathering relevant information from the internet",
@@ -82,7 +82,7 @@ async function main() {
llm,
});
const workflow = new AgentWorkflow({
const workflow = multiAgent({
agents: [researchAgent, reportAgent],
rootAgent: researchAgent,
});
......
@@ -80,8 +80,9 @@ export {
extractText,
imageToDataUrl,
messagesToHistory,
MockLLM,
toToolDescriptions,
} from "./llms";
export { MockLLM } from "./mock";
export { objectEntries } from "./object-entries";
@@ -2,15 +2,6 @@ import { fs } from "@llamaindex/env";
import { filetypemime } from "magic-bytes.js";
import type {
ChatMessage,
ChatResponse,
ChatResponseChunk,
CompletionResponse,
LLM,
LLMChatParamsNonStreaming,
LLMChatParamsStreaming,
LLMCompletionParamsNonStreaming,
LLMCompletionParamsStreaming,
LLMMetadata,
MessageContent,
MessageContentDetail,
MessageContentTextDetail,
@@ -152,82 +143,3 @@ export async function imageToDataUrl(
}
return await blobToDataUrl(input);
}
export class MockLLM implements LLM {
metadata: LLMMetadata;
options: {
timeBetweenToken: number;
responseMessage: string;
};
constructor(options?: {
timeBetweenToken?: number;
responseMessage?: string;
metadata?: LLMMetadata;
}) {
this.options = {
timeBetweenToken: options?.timeBetweenToken ?? 20,
responseMessage: options?.responseMessage ?? "This is a mock response",
};
this.metadata = options?.metadata ?? {
model: "MockLLM",
temperature: 0.5,
topP: 0.5,
contextWindow: 1024,
tokenizer: undefined,
};
}
chat(
params: LLMChatParamsStreaming<object, object>,
): Promise<AsyncIterable<ChatResponseChunk>>;
chat(
params: LLMChatParamsNonStreaming<object, object>,
): Promise<ChatResponse<object>>;
async chat(
params:
| LLMChatParamsStreaming<object, object>
| LLMChatParamsNonStreaming<object, object>,
): Promise<AsyncIterable<ChatResponseChunk> | ChatResponse<object>> {
const responseMessage = this.options.responseMessage;
const timeBetweenToken = this.options.timeBetweenToken;
if (params.stream) {
return (async function* () {
for (const char of responseMessage) {
yield { delta: char, raw: {} };
await new Promise((resolve) => setTimeout(resolve, timeBetweenToken));
}
})();
}
return {
message: { content: responseMessage, role: "assistant" },
raw: {},
};
}
async complete(
params: LLMCompletionParamsStreaming,
): Promise<AsyncIterable<CompletionResponse>>;
async complete(
params: LLMCompletionParamsNonStreaming,
): Promise<CompletionResponse>;
async complete(
params: LLMCompletionParamsStreaming | LLMCompletionParamsNonStreaming,
): Promise<AsyncIterable<CompletionResponse> | CompletionResponse> {
const responseMessage = this.options.responseMessage;
const timeBetweenToken = this.options.timeBetweenToken;
if (params.stream) {
return (async function* () {
for (const char of responseMessage) {
yield { delta: char, text: char, raw: {} };
await new Promise((resolve) => setTimeout(resolve, timeBetweenToken));
}
})();
}
return { text: responseMessage, raw: {} };
}
}
// TODO: move to a test package
import { ToolCallLLM } from "../llms/base";
import type {
ChatResponse,
ChatResponseChunk,
CompletionResponse,
LLMChatParamsNonStreaming,
LLMChatParamsStreaming,
LLMCompletionParamsNonStreaming,
LLMCompletionParamsStreaming,
LLMMetadata,
} from "../llms/type";
export class MockLLM extends ToolCallLLM {
metadata: LLMMetadata;
options: {
timeBetweenToken: number;
responseMessage: string;
};
supportToolCall: boolean = false;
constructor(options?: {
timeBetweenToken?: number;
responseMessage?: string;
metadata?: LLMMetadata;
}) {
super();
this.options = {
timeBetweenToken: options?.timeBetweenToken ?? 20,
responseMessage: options?.responseMessage ?? "This is a mock response",
};
this.metadata = options?.metadata ?? {
model: "MockLLM",
temperature: 0.5,
topP: 0.5,
contextWindow: 1024,
tokenizer: undefined,
};
}
chat(
params: LLMChatParamsStreaming<object, object>,
): Promise<AsyncIterable<ChatResponseChunk>>;
chat(
params: LLMChatParamsNonStreaming<object, object>,
): Promise<ChatResponse<object>>;
async chat(
params:
| LLMChatParamsStreaming<object, object>
| LLMChatParamsNonStreaming<object, object>,
): Promise<AsyncIterable<ChatResponseChunk> | ChatResponse<object>> {
const responseMessage = this.options.responseMessage;
const timeBetweenToken = this.options.timeBetweenToken;
if (params.stream) {
return (async function* () {
for (const char of responseMessage) {
yield { delta: char, raw: {} };
await new Promise((resolve) => setTimeout(resolve, timeBetweenToken));
}
})();
}
return {
message: { content: responseMessage, role: "assistant" },
raw: {},
};
}
async complete(
params: LLMCompletionParamsStreaming,
): Promise<AsyncIterable<CompletionResponse>>;
async complete(
params: LLMCompletionParamsNonStreaming,
): Promise<CompletionResponse>;
async complete(
params: LLMCompletionParamsStreaming | LLMCompletionParamsNonStreaming,
): Promise<AsyncIterable<CompletionResponse> | CompletionResponse> {
const responseMessage = this.options.responseMessage;
const timeBetweenToken = this.options.timeBetweenToken;
if (params.stream) {
return (async function* () {
for (const char of responseMessage) {
yield { delta: char, text: char, raw: {} };
await new Promise((resolve) => setTimeout(resolve, timeBetweenToken));
}
})();
}
return { text: responseMessage, raw: {} };
}
}
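Since the relocated `MockLLM` now extends `ToolCallLLM` and is re-exported (see the export change above), it can stand in for a real model in tests. A minimal sketch, assuming the package-level re-export:

```typescript
import { MockLLM } from "llamaindex";

const llm = new MockLLM({ responseMessage: "Mocked answer", timeBetweenToken: 0 });

// Non-streaming completion returns the canned message
const res = await llm.complete({ prompt: "anything" });
console.log(res.text); // "Mocked answer"

// Streaming yields the message character by character
const stream = await llm.complete({ prompt: "anything", stream: true });
for await (const chunk of stream) {
  process.stdout.write(chunk.delta);
}
```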
@@ -2,15 +2,21 @@ import type {
BaseChatEngine,
ContextChatEngineOptions,
} from "@llamaindex/core/chat-engine";
import type { ToolMetadata } from "@llamaindex/core/llms";
import type { BaseQueryEngine } from "@llamaindex/core/query-engine";
import type { BaseSynthesizer } from "@llamaindex/core/response-synthesizers";
import type { BaseRetriever } from "@llamaindex/core/retriever";
import type { BaseNode, Document } from "@llamaindex/core/schema";
import type { BaseDocumentStore } from "@llamaindex/core/storage/doc-store";
import type { BaseIndexStore } from "@llamaindex/core/storage/index-store";
import type { JSONSchemaType } from "ajv";
import { runTransformations } from "../ingestion/IngestionPipeline.js";
import { Settings } from "../Settings.js";
import type { StorageContext } from "../storage/StorageContext.js";
import {
type QueryEngineParam,
QueryEngineTool,
} from "../tools/QueryEngineTool.js";
export interface BaseIndexInit<T> {
storageContext: StorageContext;
@@ -19,6 +25,24 @@ export interface BaseIndexInit<T> {
indexStruct: T;
}
/**
* Common parameter type for queryTool and asQueryTool
*/
export type QueryToolParams = (
| {
// eslint-disable-next-line @typescript-eslint/no-explicit-any
options: any;
retriever?: never;
}
| {
options?: never;
retriever?: BaseRetriever;
}
) & {
responseSynthesizer?: BaseSynthesizer;
metadata?: ToolMetadata<JSONSchemaType<QueryEngineParam>> | undefined;
};
/**
* Indexes are the data structures in which we store our nodes and embeddings so
* they can be retrieved for our queries.
@@ -61,6 +85,22 @@ export abstract class BaseIndex<T> {
options?: Omit<ContextChatEngineOptions, "retriever">,
): BaseChatEngine;
/**
* Returns a query tool that wraps the index's query engine.
* Either options or retriever can be passed, but not both.
* If options are provided, they are used to create the retriever.
*/
asQueryTool(params: QueryToolParams): QueryEngineTool {
if (params.options) {
params.retriever = this.asRetriever(params.options);
}
return new QueryEngineTool({
queryEngine: this.asQueryEngine(params),
metadata: params?.metadata,
});
}
/**
* Insert a document into the index.
* @param document
@@ -76,4 +116,33 @@ export abstract class BaseIndex<T> {
refDocId: string,
deleteFromDocStore?: boolean,
): Promise<void>;
/**
* Alias for asRetriever
* @param options
*/
// eslint-disable-next-line @typescript-eslint/no-explicit-any
retriever(options?: any): BaseRetriever {
return this.asRetriever(options);
}
/**
* Alias for asQueryEngine
* @param options you can supply your own custom Retriever and ResponseSynthesizer
*/
queryEngine(options?: {
retriever?: BaseRetriever;
responseSynthesizer?: BaseSynthesizer;
}): BaseQueryEngine {
return this.asQueryEngine(options);
}
/**
* Alias for asQueryTool
* Either options or retriever can be passed, but not both.
* If options are provided, they are used to create the retriever.
*/
queryTool(params: QueryToolParams): QueryEngineTool {
return this.asQueryTool(params);
}
}
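Taken together, the new helpers give two mutually exclusive ways to build a query tool from an index. A sketch (the index, option values, and metadata are illustrative):

```typescript
// Derive the retriever from options...
const toolFromOptions = index.asQueryTool({
  options: { similarityTopK: 10 },
  metadata: {
    name: "docs_query_tool",
    description: "Answers questions about the indexed documents.",
  },
});

// ...or pass a prebuilt retriever, but not both
const toolFromRetriever = index.queryTool({
  retriever: index.asRetriever({ similarityTopK: 10 }),
});
```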
@@ -19,7 +19,7 @@ const DEFAULT_PARAMETERS: JSONSchemaType<QueryEngineParam> = {
export type QueryEngineToolParams = {
queryEngine: BaseQueryEngine;
metadata: ToolMetadata<JSONSchemaType<QueryEngineParam>>;
metadata?: ToolMetadata<JSONSchemaType<QueryEngineParam>> | undefined;
};
export type QueryEngineParam = {
......
@@ -79,13 +79,15 @@
},
"scripts": {
"dev": "bunchee --watch",
"build": "bunchee"
"build": "bunchee",
"test": "vitest run"
},
"devDependencies": {
"@llamaindex/env": "workspace:*",
"@llamaindex/core": "workspace:*",
"@types/node": "^22.9.0",
"bunchee": "6.4.0"
"bunchee": "6.4.0",
"vitest": "^2.1.5"
},
"peerDependencies": {
"@llamaindex/env": "workspace:*",
......
import type {
BaseToolWithCall,
ChatMessage,
ToolCallLLM,
} from "@llamaindex/core/llms";
import type { ChatMessage } from "@llamaindex/core/llms";
import { ChatMemoryBuffer } from "@llamaindex/core/memory";
import { PromptTemplate } from "@llamaindex/core/prompts";
import { FunctionTool } from "@llamaindex/core/tools";
@@ -19,9 +15,9 @@ import {
AgentToolCall,
AgentToolCallResult,
} from "./events";
import { FunctionAgent } from "./function-agent";
import { FunctionAgent, type FunctionAgentParams } from "./function-agent";
export const DEFAULT_HANDOFF_PROMPT = new PromptTemplate({
const DEFAULT_HANDOFF_PROMPT = new PromptTemplate({
template: `Useful for handing off to another agent.
If you are currently not equipped to handle the user's request, or another agent is better suited to handle the request, please hand off to the appropriate agent.
@@ -30,7 +26,7 @@ Currently available agents:
`,
});
export const DEFAULT_HANDOFF_OUTPUT_PROMPT = new PromptTemplate({
const DEFAULT_HANDOFF_OUTPUT_PROMPT = new PromptTemplate({
template: `Agent {to_agent} is now handling the request due to the following reason: {reason}.\nPlease continue with the current request.`,
});
@@ -56,17 +52,30 @@ export class AgentStepEvent extends WorkflowEvent<{
toolCalls: AgentToolCall[];
}> {}
export type SingleAgentParams = FunctionAgentParams & {
/**
* Whether to log verbose output
*/
verbose?: boolean;
/**
* Timeout for the workflow in seconds
*/
timeout?: number;
};
export type AgentWorkflowParams = {
/**
* List of agents to include in the workflow.
* At least one agent is required.
* Can also be an array of AgentWorkflow objects, in which case the agents from each workflow will be extracted.
*/
agents: BaseWorkflowAgent[];
agents: BaseWorkflowAgent[] | AgentWorkflow[];
/**
* The agent to start the workflow with.
* Must be an agent in the `agents` list.
* Can also be an AgentWorkflow object, in which case the workflow must have exactly one agent.
*/
rootAgent: BaseWorkflowAgent;
rootAgent: BaseWorkflowAgent | AgentWorkflow;
verbose?: boolean;
/**
* Timeout for the workflow in seconds.
@@ -74,6 +83,24 @@ export type AgentWorkflowParams = {
timeout?: number;
};
/**
* Create a multi-agent workflow
* @param params - Parameters for the AgentWorkflow
* @returns A new AgentWorkflow instance
*/
export const multiAgent = (params: AgentWorkflowParams): AgentWorkflow => {
return new AgentWorkflow(params);
};
/**
* Create a simple workflow with a single agent and specified tools
* @param params - Parameters for the single agent workflow
* @returns A new AgentWorkflow instance
*/
export const agent = (params: SingleAgentParams): AgentWorkflow => {
return AgentWorkflow.fromTools(params);
};
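Because `AgentWorkflowParams` now also accepts `AgentWorkflow` instances, the result of `agent(...)` can be passed straight to `multiAgent(...)`. A sketch with illustrative agents and tools:

```typescript
const researcher = agent({
  name: "ResearchAgent",
  description: "Gathers information",
  tools: [searchTool],
  llm,
});
const writer = agent({
  name: "WriteAgent",
  description: "Writes reports",
  tools: [writeTool],
  llm,
});

// Single-agent workflows are unwrapped internally; the root workflow must
// contain exactly one agent.
const workflow = multiAgent({
  agents: [researcher, writer],
  rootAgent: researcher,
});
```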
/**
* AgentWorkflow - An event-driven workflow for executing agents with tools
*
@@ -93,12 +120,47 @@ export class AgentWorkflow {
timeout: timeout ?? 60,
});
this.verbose = verbose ?? false;
this.rootAgentName = rootAgent.name;
// Handle AgentWorkflow cases for agents
const processedAgents: BaseWorkflowAgent[] = [];
if (agents.length > 0) {
if (agents[0] instanceof AgentWorkflow) {
// If agents is AgentWorkflow[], extract the BaseWorkflowAgent from each workflow
const agentWorkflows = agents as AgentWorkflow[];
agentWorkflows.forEach((workflow) => {
const workflowAgents = workflow.getAgents();
processedAgents.push(...workflowAgents);
});
} else {
// Otherwise, agents is already BaseWorkflowAgent[]
processedAgents.push(...(agents as BaseWorkflowAgent[]));
}
}
// Handle AgentWorkflow case for rootAgent and set rootAgentName
if (rootAgent instanceof AgentWorkflow) {
// If rootAgent is an AgentWorkflow, check if it has exactly one agent
const rootAgents = rootAgent.getAgents();
if (rootAgents.length !== 1) {
throw new Error(
`Root agent must be a single agent, but it is a workflow with ${rootAgents.length} agents`,
);
}
// We know rootAgents[0] exists because we checked length === 1 above
this.rootAgentName = rootAgents[0]!.name;
} else {
// Otherwise, rootAgent is already a BaseWorkflowAgent
this.rootAgentName = rootAgent.name;
}
// Validate root agent
if (!agents.some((a) => a.name === this.rootAgentName)) {
throw new Error(`Root agent ${rootAgent} not found in agents`);
if (!processedAgents.some((a) => a.name === this.rootAgentName)) {
throw new Error(`Root agent ${this.rootAgentName} not found in agents`);
}
this.addAgents(agents ?? []);
this.addAgents(processedAgents);
}
private validateAgent(agent: BaseWorkflowAgent) {
@@ -141,6 +203,9 @@ export class AgentWorkflow {
});
}
/**
* Adds a new agent to the workflow
*/
addAgent(agent: BaseWorkflowAgent): this {
this.agents.set(agent.name, agent);
this.validateAgent(agent);
@@ -148,35 +213,34 @@ export class AgentWorkflow {
return this;
}
/**
* Gets all agents in this workflow
* @returns Array of agents in this workflow
*/
getAgents(): BaseWorkflowAgent[] {
return Array.from(this.agents.values());
}
/**
* Create a simple workflow with a single agent and specified tools
* @param params - Parameters for the single agent workflow
* @returns A new AgentWorkflow instance
*/
static fromTools({
tools,
llm,
systemPrompt,
verbose,
timeout,
}: {
tools: BaseToolWithCall[];
llm?: ToolCallLLM;
systemPrompt?: string;
verbose?: boolean;
timeout?: number;
}): AgentWorkflow {
static fromTools(params: SingleAgentParams): AgentWorkflow {
const agent = new FunctionAgent({
name: "Agent",
description: "A single agent that uses the provided tools or functions.",
tools,
llm,
systemPrompt,
name: params.name,
description: params.description,
tools: params.tools,
llm: params.llm,
systemPrompt: params.systemPrompt,
});
const workflow = new AgentWorkflow({
agents: [agent],
rootAgent: agent,
verbose: verbose ?? false,
timeout: timeout ?? 60,
verbose: params.verbose ?? false,
timeout: params.timeout ?? 60,
});
return workflow;
......
import type { JSONObject } from "@llamaindex/core/global";
import { Settings } from "@llamaindex/core/global";
import type {
BaseToolWithCall,
ChatMessage,
ChatResponseChunk,
import {
ToolCallLLM,
type BaseToolWithCall,
type ChatMessage,
type ChatResponseChunk,
} from "@llamaindex/core/llms";
import { BaseMemory } from "@llamaindex/core/memory";
import type { HandlerContext } from "../workflow-context";
import { AgentWorkflow } from "./agent-workflow";
import { type AgentWorkflowContext, type BaseWorkflowAgent } from "./base";
import {
AgentOutput,
@@ -20,7 +21,10 @@ const DEFAULT_SYSTEM_PROMPT =
"You are a helpful assistant. Use the provided tools to answer questions.";
export type FunctionAgentParams = {
name: string;
/**
* Agent name
*/
name?: string | undefined;
/**
* LLM to use for the agent. Defaults to Settings.llm when omitted.
*/
@@ -29,15 +33,16 @@ export type FunctionAgentParams = {
* Description of the agent, useful for task assignment.
* Should provide the capabilities or responsibilities of the agent.
*/
description: string;
description?: string | undefined;
/**
* List of tools that the agent can use, requires at least one tool.
*/
tools: BaseToolWithCall[];
/**
* List of agents that this agent can delegate tasks to
* Can be a list of agent names as strings, BaseWorkflowAgent instances, or AgentWorkflow instances
*/
canHandoffTo?: string[] | BaseWorkflowAgent[] | undefined;
canHandoffTo?: string[] | BaseWorkflowAgent[] | AgentWorkflow[] | undefined;
/**
* Custom system prompt for the agent
*/
@@ -60,20 +65,43 @@ export class FunctionAgent implements BaseWorkflowAgent {
canHandoffTo,
systemPrompt,
}: FunctionAgentParams) {
this.name = name;
this.name = name ?? "Agent";
this.llm = llm ?? (Settings.llm as ToolCallLLM);
this.description = description;
if (!this.llm.supportToolCall) {
throw new Error("FunctionAgent requires an LLM that supports tool calls");
}
this.description =
description ??
"A single agent that uses the provided tools or functions.";
this.tools = tools;
if (tools.length === 0) {
throw new Error("FunctionAgent must have at least one tool");
}
this.canHandoffTo =
Array.isArray(canHandoffTo) &&
canHandoffTo.every((item) => typeof item === "string")
? canHandoffTo
: (canHandoffTo?.map((agent) =>
typeof agent === "string" ? agent : agent.name,
) ?? []);
// Process canHandoffTo to extract agent names
this.canHandoffTo = [];
if (canHandoffTo) {
if (Array.isArray(canHandoffTo)) {
if (canHandoffTo.length > 0) {
if (typeof canHandoffTo[0] === "string") {
// string[] case
this.canHandoffTo = canHandoffTo as string[];
} else if (canHandoffTo[0] instanceof AgentWorkflow) {
// AgentWorkflow[] case
const workflows = canHandoffTo as AgentWorkflow[];
workflows.forEach((workflow) => {
const agentNames = workflow
.getAgents()
.map((agent) => agent.name);
this.canHandoffTo.push(...agentNames);
});
} else {
// BaseWorkflowAgent[] case
const agents = canHandoffTo as BaseWorkflowAgent[];
this.canHandoffTo = agents.map((agent) => agent.name);
}
}
}
}
const uniqueHandoffAgents = new Set(this.canHandoffTo);
if (uniqueHandoffAgents.size !== this.canHandoffTo.length) {
throw new Error("Duplicate handoff agents");
......
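Likewise, the widened `canHandoffTo` type lets a handoff target be given as a name, a `BaseWorkflowAgent`, or a single-agent `AgentWorkflow`. A sketch (the agents and tools are illustrative):

```typescript
const billingWorkflow = agent({
  name: "BillingAgent",
  description: "Handles billing questions",
  tools: [billingTool],
  llm,
});

const triageAgent = new FunctionAgent({
  name: "TriageAgent",
  description: "Routes requests to the right specialist",
  tools: [faqTool],
  // AgentWorkflow[] case: agent names are extracted from each workflow
  canHandoffTo: [billingWorkflow],
  llm,
});
```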