Unverified commit fa40b365 authored by Marcus Schiesser, committed by GitHub

docs: cleanup (#1745)

parent da8068e9
Showing 27 additions and 228 deletions
---
title: Agents
---
A built-in agent that can make decisions and reason based on the tools provided to it.
## OpenAI Agent
import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSource from "!raw-loader!../../../../../../../examples/agent/openai";
<DynamicCodeBlock lang="ts" code={CodeSource} />
---
title: Gemini Agent
---
import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSourceGemini from "!raw-loader!../../../../../../../examples/gemini/agent.ts";
## Installation
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install llamaindex @llamaindex/google
```
```shell tab="yarn"
yarn add llamaindex @llamaindex/google
```
```shell tab="pnpm"
pnpm add llamaindex @llamaindex/google
```
</Tabs>
## Source
<DynamicCodeBlock lang="ts" code={CodeSourceGemini} />
---
title: Chat Engine
---
import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSource from "!raw-loader!../../../../../../../examples/chatEngine";
Chat Engine is a class that lets you build a chatbot on top of a retriever: it wraps the retriever so you can chat with your data in a conversational, multi-turn way.
<DynamicCodeBlock lang="ts" code={CodeSource} />
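The example above is loaded from the repository. For readers without the source at hand, here is a minimal sketch under stated assumptions: it uses the `ContextChatEngine` exported by `llamaindex`, and the document text and question are placeholders.

```typescript
import { ContextChatEngine, Document, VectorStoreIndex } from "llamaindex";

async function main() {
  // Index a small placeholder document so the retriever has something to return.
  const document = new Document({
    text: "The Chat Engine wraps a retriever so you can chat with your data.",
  });
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Build a chat engine on top of the index's retriever.
  const chatEngine = new ContextChatEngine({
    retriever: index.asRetriever(),
  });

  // The engine keeps the chat history, so follow-up questions stay in context.
  const response = await chatEngine.chat({
    message: "What does the Chat Engine wrap?",
  });
  console.log(response.message.content);
}

main().catch(console.error);
```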
---
title: Context-Aware Agent
---
The Context-Aware Agent enhances the capabilities of standard LLM agents by incorporating relevant context from a retriever for each query. This allows the agent to provide more informed and specific responses based on the available information.
## Usage
Here's a simple example of how to use the Context-Aware Agent:
```typescript
import { Document, VectorStoreIndex } from "llamaindex";
import { OpenAI, OpenAIContextAwareAgent } from "@llamaindex/openai";

async function createContextAwareAgent() {
  // Create and index some documents
  const documents = [
    new Document({
      text: "LlamaIndex is a data framework for LLM applications.",
      id_: "doc1",
    }),
    new Document({
      text: "The Eiffel Tower is located in Paris, France.",
      id_: "doc2",
    }),
  ];
  const index = await VectorStoreIndex.fromDocuments(documents);
  const retriever = index.asRetriever({ similarityTopK: 1 });

  // Create the Context-Aware Agent
  const agent = new OpenAIContextAwareAgent({
    llm: new OpenAI({ model: "gpt-3.5-turbo" }),
    contextRetriever: retriever,
  });

  // Use the agent to answer queries
  const response = await agent.chat({
    message: "What is LlamaIndex used for?",
  });
  console.log("Agent Response:", response.response);
}

createContextAwareAgent().catch(console.error);
```
In this example, the Context-Aware Agent uses the retriever to fetch relevant context for each query, allowing it to provide more accurate and informed responses based on the indexed documents.
## Key Components
- `contextRetriever`: A retriever (e.g., from a VectorStoreIndex) that fetches relevant documents or passages for each query.
## Available Context-Aware Agents
- `OpenAIContextAwareAgent`: A context-aware agent using OpenAI's models.
{
"title": "Examples",
"pages": [
"more_examples",
"chat_engine",
"vector_index",
"summary_index",
"save_load_index",
"context_aware_agent",
"agent",
"agent_gemini",
"local_llm",
"other_llms"
]
}
---
title: Using other LLM APIs
---
import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSource from "!raw-loader!../../../../../../../examples/mistral";
By default, LlamaIndex.TS uses OpenAI's LLMs and embedding models, but we support [lots of other LLMs](../modules/llms), including models from Mistral (Mistral, Mixtral), Anthropic (Claude), and Google (Gemini).
If you don't want to use an API at all, you can [run a local model](./local_llm).
This example walks you through the process of setting up a Mistral model:
## Installation
import { Tab, Tabs } from "fumadocs-ui/components/tabs";
<Tabs groupId="install" items={["npm", "yarn", "pnpm"]} persist>
```shell tab="npm"
npm install llamaindex @llamaindex/mistral
```
```shell tab="yarn"
yarn add llamaindex @llamaindex/mistral
```
```shell tab="pnpm"
pnpm add llamaindex @llamaindex/mistral
```
</Tabs>
## Using another LLM
You can specify what LLM LlamaIndex.TS will use on the `Settings` object, like this:
```typescript
import { MistralAI } from "@llamaindex/mistral";
import { Settings } from "llamaindex";
Settings.llm = new MistralAI({
  model: "mistral-tiny",
  apiKey: "<YOUR_API_KEY>",
});
```
You can see examples of other APIs we support by checking out "Available LLMs" in the sidebar of our [LLMs section](../modules/llms).
## Using another embedding model
A frequent gotcha when switching to a different LLM API is that LlamaIndex will, by default, still index and embed your data using OpenAI's embeddings. To switch away from OpenAI completely, you will need to set your embedding model as well, for example:
```typescript
import { MistralAIEmbedding } from "@llamaindex/mistral";
import { Settings } from "llamaindex";
Settings.embedModel = new MistralAIEmbedding();
```
We support [many different embeddings](../modules/embeddings).
## Full example
This example uses Mistral's `mistral-tiny` model as the LLM and Mistral for embeddings as well.
<DynamicCodeBlock lang="ts" code={CodeSource} />
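The full example above is loaded from the repository. As a hedged, self-contained sketch of the same flow, the snippet below switches both the LLM and the embedding model to Mistral, indexes a placeholder document, and runs a query; the document text and query are illustrative only.

```typescript
import { MistralAI, MistralAIEmbedding } from "@llamaindex/mistral";
import { Document, Settings, VectorStoreIndex } from "llamaindex";

// Switch both the LLM and the embedding model away from OpenAI.
// Assumes MISTRAL_API_KEY is set in the environment.
Settings.llm = new MistralAI({ model: "mistral-tiny" });
Settings.embedModel = new MistralAIEmbedding();

async function main() {
  // Placeholder document; the loaded example reads real data instead.
  const document = new Document({
    text: "Mistral models can replace OpenAI models in LlamaIndex.TS.",
  });
  const index = await VectorStoreIndex.fromDocuments([document]);

  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "Which models can replace OpenAI models?",
  });
  console.log(response.message.content);
}

main().catch(console.error);
```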
---
title: Save/Load an Index
---
import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSource from "!raw-loader!../../../../../../../examples/storageContext";
<DynamicCodeBlock lang="ts" code={CodeSource} />
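The example above is loaded from `examples/storageContext`. If you just want the shape of the API, here is a minimal sketch under stated assumptions: it uses `storageContextFromDefaults` with a local `persistDir` and `VectorStoreIndex.init` to reload, and the directory name and document text are placeholders.

```typescript
import {
  Document,
  storageContextFromDefaults,
  VectorStoreIndex,
} from "llamaindex";

async function main() {
  // Persist the index to a local directory so it can be reloaded later.
  const storageContext = await storageContextFromDefaults({
    persistDir: "./storage",
  });
  const document = new Document({ text: "Indexes can be saved and reloaded." });
  await VectorStoreIndex.fromDocuments([document], { storageContext });

  // Later (for example in another process): reload the index from the same
  // directory instead of re-parsing and re-embedding the documents.
  const loadedStorageContext = await storageContextFromDefaults({
    persistDir: "./storage",
  });
  const loadedIndex = await VectorStoreIndex.init({
    storageContext: loadedStorageContext,
  });

  const response = await loadedIndex
    .asQueryEngine()
    .query({ query: "What can be saved and reloaded?" });
  console.log(response.message.content);
}

main().catch(console.error);
```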
---
title: Summary Index
---
import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSource from "!raw-loader!../../../../../../../examples/summaryIndex";
<DynamicCodeBlock lang="ts" code={CodeSource} />
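The example above is loaded from `examples/summaryIndex`. As a quick sketch of the idea, assuming the `SummaryIndex` class exported by `llamaindex` and using a placeholder document, the flow looks like this:

```typescript
import { Document, SummaryIndex } from "llamaindex";

async function main() {
  // A summary index keeps all nodes and synthesizes an answer across them,
  // which suits "summarize this" style questions.
  const document = new Document({
    text: "LlamaIndex.TS offers several index types, including the summary index.",
  });
  const index = await SummaryIndex.fromDocuments([document]);

  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({ query: "Summarize the document." });
  console.log(response.message.content);
}

main().catch(console.error);
```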
---
title: Vector Index
---
import { DynamicCodeBlock } from 'fumadocs-ui/components/dynamic-codeblock';
import CodeSource from "!raw-loader!../../../../../../../examples/vectorIndex";
<DynamicCodeBlock lang="ts" code={CodeSource} />
---
title: Create-Llama
---
Once you've mastered basic [retrieval-augmented generation](retrieval_augmented_generation), you may want to create an interface to chat with your data. You can do this step by step, but we recommend getting started quickly using `create-llama`.
## Using create-llama
`create-llama` is a powerful but easy to use command-line tool that generates a working, full-stack web application that allows you to chat with your data. You can learn more about it on [the `create-llama` README page](https://www.npmjs.com/package/create-llama).
Run it once and it will ask you a series of questions about the kind of application you want to generate. Then you can customize your application to suit your use case. To get started, run:
......
---
title: Code examples
---
Our GitHub repository has a wealth of examples to explore and try out. You can check out our [examples folder](https://github.com/run-llama/LlamaIndexTS/tree/main/examples) to see them all at once, or browse the pages in this section for some selected highlights.
## Use examples locally
It may be useful to check out all the examples at once so you can try them out locally. To set them up in a folder called `my-new-project`, run these commands:
......
Then you can run any example in the folder with `tsx`, e.g.:
```bash npm2yarn
npx tsx ./vectorIndex.ts
```
## Try examples online
You can also try the examples online using StackBlitz:
<iframe
className="w-full h-[440px]"
aria-label="LlamaIndex.TS Examples"
aria-description="This is a list of examples for LlamaIndex.TS."
src="https://stackblitz.com/github/run-llama/LlamaIndexTS/tree/main/examples?file=README.md"
/>
---
title: Frameworks
description: We support multiple JS runtimes, frameworks, and bundlers.
---
import {
......
{
  "title": "Framework",
  "description": "The setup guide",
  "defaultOpen": true,
  "pages": ["node", "typescript", "next", "vite", "cloudflare"]
}
......
By default, we are using `js-tiktoken` for tokenization. You can install `gp
```
</Tabs>
**Note**: This only works for Node.js.
## TypeScript support
......
......
In most cases, you'll also need an LLM package to use LlamaIndex. For example, t
```
</Tabs>
Go to [LLM APIs](/docs/llamaindex/modules/llms) to find out how to use other LLMs.
## What's next?
<Cards>
  <Card
    title="Learn LlamaIndex.TS"
    description="Learn how to use LlamaIndex.TS by starting with one of our tutorials."
    href="/docs/llamaindex/tutorials/rag"
  />
  <Card
    title="Show me code examples"
    description="Explore code examples using LlamaIndex.TS."
    href="/docs/llamaindex/getting_started/examples"
  />
</Cards>