Commit c818e90c authored by Emanuel Ferreira, committed by GitHub

refactor: restructure documentation (#420)


Co-authored-by: Alex Yang <himself65@outlook.com>
parent 570973b9
---
sidebar_position: 4
---
# End to End Examples
We include several end-to-end examples using LlamaIndex.TS in the repository.
Check out the examples below, or try them out and complete them in minutes with the interactive GitHub Codespaces tutorials provided by Dev-Docs [here](https://codespaces.new/team-dev-docs/lits-dev-docs-playground?devcontainer_path=.devcontainer%2Fjavascript_ltsquickstart%2Fdevcontainer.json):
## [Chat Engine](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/chatEngine.ts)
Read a file and chat about it with the LLM.
## [Vector Index](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/vectorIndex.ts)
Create a vector index and query it. The vector index will use embeddings to fetch the top k most relevant nodes. By default, the top k is 2.
## [Summary Index](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/summaryIndex.ts)
Create a summary index and query it. This example also uses the `LLMRetriever`, which uses the LLM to select the best nodes to use when generating an answer.
## [Save / Load an Index](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/storageContext.ts)
Create and load a vector index. Persistence to disk in LlamaIndex.TS happens automatically once a storage context object is created.
## [Customized Vector Index](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/vectorIndexCustomize.ts)
Create a vector index and query it, while also configuring the `LLM`, the `ServiceContext`, and the `similarity_top_k`.
## [OpenAI LLM](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/openai.ts)
Create an OpenAI LLM and directly use it for chat.
## [Llama2 DeuceLLM](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/llamadeuce.ts)
Create a Llama-2 LLM and directly use it for chat.
## [SubQuestionQueryEngine](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/subquestion.ts)
Uses the `SubQuestionQueryEngine`, which breaks complex queries into multiple sub-questions, and then aggregates a response across the answers to all of them.
## [Low Level Modules](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/lowlevel.ts)
This example uses several low-level components, which removes the need for an actual query engine. These components can be used anywhere, in any application, or customized and sub-classed to meet your own needs.
## [JSON Entity Extraction](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/jsonExtract.ts)
Features OpenAI's chat API (using [`json_mode`](https://platform.openai.com/docs/guides/text-generation/json-mode)) to extract a JSON object from a sales call transcript.
label: Examples
position: 2
---
sidebar_position: 1
---
import CodeBlock from "@theme/CodeBlock";
import CodeSource from "!raw-loader!../../../../examples/chatEngine";
# Chat Engine
Chat Engine is a class that lets you build a chatbot on top of a retriever: it wraps the retriever so you can converse with your data, carrying context from turn to turn.
<CodeBlock language="ts">{CodeSource}</CodeBlock>
---
sidebar_position: 5
---
# More examples
You can check out more examples in the [examples](https://github.com/run-llama/LlamaIndexTS/tree/main/examples) folder of the repository.
---
sidebar_position: 4
---
import CodeBlock from "@theme/CodeBlock";
import CodeSource from "!raw-loader!../../../../examples/storageContext";
# Save/Load an Index
<CodeBlock language="ts">{CodeSource}</CodeBlock>
---
sidebar_position: 3
---
import CodeBlock from "@theme/CodeBlock";
import CodeSource from "!raw-loader!../../../../examples/summaryIndex";
# Summary Index
<CodeBlock language="ts">{CodeSource}</CodeBlock>
---
sidebar_position: 2
---
import CodeBlock from "@theme/CodeBlock";
import CodeSource from "!raw-loader!../../../../examples/vectorIndex";
# Vector Index
<CodeBlock language="ts">{CodeSource}</CodeBlock>
label: Getting Started
position: 1
sidebar_position: 3
---
# Concepts
LlamaIndex.TS helps you build LLM-powered applications (e.g. Q&A, chatbot) over custom data.
LlamaIndex uses a two-stage method when using an LLM with your data:
1. **indexing stage**: preparing a knowledge base, and
2. **querying stage**: retrieving relevant context from the knowledge to assist the LLM in responding to a question
![](../_static/concepts/rag.jpg)
This process is also known as Retrieval Augmented Generation (RAG).
Let's explore each stage in detail.
LlamaIndex.TS helps you prepare the knowledge base with a suite of data connectors and indexes.
![](../_static/concepts/indexing.jpg)
[**Data Loaders**](./modules/high_level/data_loader.md):
A data connector (i.e. `Reader`) ingests data from different data sources and data formats into a simple `Document` representation (text and simple metadata).
LlamaIndex provides composable modules that help you build and integrate RAG pipelines.
These building blocks can be customized to reflect ranking preferences, as well as composed to reason over multiple knowledge bases in a structured way.
![](../_static/concepts/querying.jpg)
#### Building Blocks
---
sidebar_position: 2
---
# Environments
---
sidebar_position: 0
---
# Installation and Setup
---
sidebar_position: 1
---
# Starter Tutorial
For more complex applications, our lower-level APIs allow advanced users to customize and extend any module to fit their needs.
Our documentation includes [Installation Instructions](./installation.mdx) and a [Starter Tutorial](./starter.md) to build your first application.
Once you're up and running, [High-Level Concepts](./getting_started/concepts.md) has an overview of LlamaIndex's modular architecture. For more hands-on practical examples, look through our [End-to-End Tutorials](./end_to_end.md).
## 🗺️ Ecosystem
label: High-Level Modules