---
title: LlamaIndexTS v0.3.0
description: What's new in LlamaIndexTS v0.3.0
slug: welcome-llamaindexts-v0.3
authors:
  - name: Alex Yang
    title: LlamaIndexTS maintainer, Node.js Member
    url: https://github.com/himself65
    image_url: https://github.com/himself65.png
tags: [llamaindex, agent]
hide_table_of_contents: false
---

# What's new in LlamaIndexTS v0.3.0

## Agents

In this release, we've not only ported the Agent module from the LlamaIndex Python version but have significantly enhanced it to be more powerful and user-friendly for JavaScript/TypeScript applications.

Starting with v0.3.0, we are introducing multiple agents specifically designed for RAG applications, including:

- `OpenAIAgent`
- `AnthropicAgent`
- `ReActAgent`

For example, using the `OpenAIAgent`:
```ts
import { OpenAIAgent } from "llamaindex";
import { tools } from "./tools";

const agent = new OpenAIAgent({
  tools: [...tools],
});
const { response } = await agent.chat({
  message: "What is the weather today?",
  stream: false,
});

console.log(response.message.content);
```
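
The `./tools` module is not shown in this post; as a minimal sketch, it could export tools built with llamaindex's `FunctionTool` (the weather tool below is hypothetical):

```ts
// tools.ts: a hypothetical tools module for the examples in this post
import { FunctionTool } from "llamaindex";

export const tools = [
  FunctionTool.from(
    // a stub implementation; replace with a real weather lookup
    ({ city }: { city: string }) => `It is sunny in ${city} today.`,
    {
      name: "getWeather",
      description: "Get today's weather for a given city",
      parameters: {
        type: "object",
        properties: {
          city: { type: "string", description: "The city to look up" },
        },
        required: ["city"],
      },
    },
  ),
];
```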

We are also introducing the abstract `AgentRunner` class, which allows you to create your own agent by implementing just the task handler.

```ts
import {
  AgentRunner,
  AgentWorker,
  OpenAI,
  type BaseTool,
  type ChatMessage,
  type TaskHandler,
} from "llamaindex";

class MyLLM extends OpenAI {}

// Params is user-defined for this example: pass either a list of tools
// or a tool retriever (any object with a `retrieve` method)
type Params = {
  llm: MyLLM;
  chatHistory?: ChatMessage[];
  systemPrompt?: string | null;
} & (
  | { tools: BaseTool[] }
  | { toolRetriever: { retrieve: () => Promise<BaseTool[]> } }
);

export class MyAgentWorker extends AgentWorker<MyLLM> {
  taskHandler = MyAgent.taskHandler;
}

export class MyAgent extends AgentRunner<MyLLM> {
  constructor(params: Params) {
    super({
      llm: params.llm,
      chatHistory: params.chatHistory ?? [],
      systemPrompt: params.systemPrompt ?? null,
      runner: new MyAgentWorker(),
      tools:
        "tools" in params
          ? params.tools
          : params.toolRetriever.retrieve.bind(params.toolRetriever),
    });
  }

  // createStore creates a store for each task; by default the store
  // only includes `messages` and `toolOutputs`
  createStore = AgentRunner.defaultCreateStore;

  static taskHandler: TaskHandler<MyLLM> = async (step) => {
    const { input } = step;
    const { llm, stream } = step.context;
    // append the task input to the chat history
    if (input) {
      step.context.store.messages = [...step.context.store.messages, input];
    }
    const response = await llm.chat({
      stream,
      messages: step.context.store.messages,
    });
    // store the response for the next task step
    step.context.store.messages = [
      ...step.context.store.messages,
      response.message,
    ];
    // your logic here to decide whether to continue the task
    const shouldContinue = Math.random() > 0.5; /* <-- replace with your logic here */
    if (shouldContinue) {
      // to continue the task, insert the new context for the next task step
      step.context.store.messages = [
        ...step.context.store.messages,
        {
          content: "INSERT MY NEW DATA",
          role: "user",
        },
      ];
      return {
        taskStep: step,
        output: response,
        isLast: false,
      };
    } else {
      // to end the task, return the response with `isLast: true`
      return {
        taskStep: step,
        output: response,
        isLast: true,
      };
    }
  };
}
```
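
A usage sketch for the custom agent, mirroring the `OpenAIAgent` example above (the model name is illustrative):

```ts
const agent = new MyAgent({
  llm: new MyLLM({ model: "gpt-4-turbo" }),
  tools: [],
});

const { response } = await agent.chat({
  message: "Hello!",
  stream: false,
});
console.log(response.message.content);
```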

## Web Stream API for streaming responses

The Web Streams API is a web standard used by many modern web frameworks and runtimes (such as React 19, Deno, and Node.js 22). We have migrated streaming responses to Web Streams to ensure broader compatibility.

For instance, you can use the streaming response in a simple HTTP server:

```ts
import { createServer } from "http";
import { OpenAIAgent } from "llamaindex";
import { streamToResponse } from "ai";
import { tools } from "./tools";

const agent = new OpenAIAgent({
  tools: [...tools],
});

const server = createServer(async (req, res) => {
  const response = await agent.chat({
    message: "What is the weather today?",
    stream: true,
  });

  // Transform the agent response into a string readable stream
  const stream: ReadableStream<string> = response.pipeThrough(
    new TransformStream({
      transform: (chunk, controller) => {
        controller.enqueue(chunk.response.delta);
      },
    }),
  );

  // Pipe the stream to the HTTP response
  streamToResponse(stream, res);
});

server.listen(3000);
```
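
Once the server is running, you can hit it with `curl http://localhost:3000` and watch the response arrive chunk by chunk.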

Or it can be integrated into React Server Components (RSC) in Next.js:

```tsx
// app/actions/index.tsx
"use server";
import { createStreamableUI } from "ai/rsc";
import { OpenAIAgent } from "llamaindex";
import type { ChatMessage } from "llamaindex/llm/types";

export async function chatWithAgent(
  question: string,
  prevMessages: ChatMessage[] = [],
) {
  const agent = new OpenAIAgent({
    tools: [],
  });
  const responseStream = await agent.chat({
    stream: true,
    message: question,
    chatHistory: prevMessages,
  });
  const uiStream = createStreamableUI(<div>loading...</div>);
  responseStream
    .pipeTo(
      new WritableStream({
        start: () => {
          uiStream.update("response:");
        },
        write: async (message) => {
          uiStream.append(message.response.delta);
        },
        // mark the UI stream as finished once the agent stream completes
        close: () => {
          uiStream.done();
        },
      }),
    )
    .catch((error) => uiStream.error(error));
  return uiStream.value;
}
```
```tsx
// app/src/page.tsx
"use client";
import { chatWithAgent } from "@/actions";
import type { JSX } from "react";
import { useFormState } from "react-dom";

export const runtime = "edge";

export default function Home() {
  const [state, action] = useFormState<JSX.Element | null>(async () => {
    return chatWithAgent("hello!", []);
  }, null);
  return (
    <main>
      {state}
      <form action={action}>
        <button>Chat</button>
      </form>
    </main>
  );
}
```
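
In this setup, the server action kicks off the agent, pipes its streamed deltas into the streamable UI, and immediately returns `uiStream.value`; the client renders that value via `useFormState` and sees the response grow as chunks arrive.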

## Improvements in LlamaIndexTS v0.3.0

### Better TypeScript support

We have made significant improvements to the type system to ensure that all code is thoroughly type-checked before it is published. This ongoing work has already resulted in better module reliability and a better developer experience.
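
As a concrete example based on the snippets above, the return type of `agent.chat` follows the `stream` flag, so both call shapes type-check without casts (a sketch; exact type names may differ):

```ts
import { OpenAIAgent } from "llamaindex";

const agent = new OpenAIAgent({ tools: [] });

// with stream: false, chat() resolves to a chat response object
const { response } = await agent.chat({
  message: "What is the weather today?",
  stream: false,
});
console.log(response.message.content);

// with stream: true, chat() resolves to a ReadableStream of deltas,
// so Web Stream methods like pipeTo() type-check directly
const stream = await agent.chat({
  message: "What is the weather today?",
  stream: true,
});
await stream.pipeTo(
  new WritableStream({
    write: (chunk) => {
      process.stdout.write(chunk.response.delta);
    },
  }),
);
```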