Unverified Commit 0188cf3b authored by Fabian Wimmer, committed by GitHub

docs: fix typos, add API references (#1161)

parent e0b4f9c0
Showing 32 additions and 15 deletions
@@ -21,7 +21,7 @@ LlamaIndex.TS handles several major use cases:
- **Structured Data Extraction**: turning complex, unstructured and semi-structured data into uniform, programmatically accessible formats.
- **Retrieval-Augmented Generation (RAG)**: answering queries across your internal data by providing LLMs with up-to-date, semantically relevant context; examples include question-and-answer systems and chat bots.
- **Autonomous Agents**: building software that is capable of intelligently selecting and using tools to accomplish tasks in an interative, unsupervised manner.
- **Autonomous Agents**: building software that is capable of intelligently selecting and using tools to accomplish tasks in an interactive, unsupervised manner.
## 👨‍👩‍👧‍👦 Who is LlamaIndex for?
@@ -27,3 +27,4 @@ for await (const chunk of stream) {
- [ContextChatEngine](../api/classes/ContextChatEngine.md)
- [CondenseQuestionChatEngine](../api/classes/CondenseQuestionChatEngine.md)
- [SimpleChatEngine](../api/classes/SimpleChatEngine.md)
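
For orientation, here is a minimal streaming sketch of the `ContextChatEngine` referenced above. The example document text is hypothetical, and the chunk shape (`chunk.response`) may differ between releases:

```ts
import { ContextChatEngine, Document, VectorStoreIndex } from "llamaindex";

// Build a tiny index to retrieve context from (a hypothetical document).
const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "The author grew up painting and writing short stories." }),
]);

// The chat engine injects retrieved context into every chat turn.
const chatEngine = new ContextChatEngine({ retriever: index.asRetriever() });

// Stream the reply chunk by chunk.
const stream = await chatEngine.chat({
  message: "What did the author do growing up?",
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.response);
}
```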
@@ -21,3 +21,4 @@ const index = await VectorStoreIndex.fromDocuments([document]);
- [SummaryIndex](../api/classes/SummaryIndex.md)
- [VectorStoreIndex](../api/classes/VectorStoreIndex.md)
- [KeywordTableIndex](../api/classes/KeywordTableIndex.md)
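
As a quick end-to-end sketch of the `VectorStoreIndex` flow shown above, assuming a hypothetical in-memory `Document` (reader setup and persistence omitted):

```ts
import { Document, VectorStoreIndex } from "llamaindex";

// A hypothetical document; in practice this would come from a Reader.
const document = new Document({
  text: "LlamaIndex.TS is a data framework for LLM applications.",
});

// Build the index, then query it.
const index = await VectorStoreIndex.fromDocuments([document]);
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({ query: "What is LlamaIndex.TS?" });
console.log(response.toString());
```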
@@ -98,3 +98,7 @@ Use the `embedDocuments` method to generate embeddings for the texts.
```ts
const result = await embeddings.embedDocuments(texts);
console.log(result); // Perfectly customized embeddings, ready to serve.
```
## API Reference
- [MixedbreadAIEmbeddings](../../../api/classes/MixedbreadAIEmbeddings.md)
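
A short sketch of how the embeddings class might be wired up end to end. The constructor options (API key and model name) are assumptions; check the `MixedbreadAIEmbeddings` reference for the exact parameter names:

```ts
import { MixedbreadAIEmbeddings } from "llamaindex";

// Constructor options below are assumptions, not guaranteed parameter names.
const embeddings = new MixedbreadAIEmbeddings({
  apiKey: process.env.MXBAI_API_KEY,
  model: "mixedbread-ai/mxbai-embed-large-v1",
});

const texts = ["Bread is life", "Bread is love"];

// One embedding vector is returned per input text.
const result = await embeddings.embedDocuments(texts);
console.log(result);
```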
@@ -2,7 +2,7 @@
## Concept
Evaluation and benchmarking are crucial concepts in LLM development. To improve the perfomance of an LLM app (RAG, agents) you must have a way to measure it.
Evaluation and benchmarking are crucial concepts in LLM development. To improve the performance of an LLM app (RAG, agents) you must have a way to measure it.
LlamaIndex offers key modules to measure both the quality of generated results and the quality of retrieval.
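
As a rough illustration, a faithfulness check on a generated answer might look like the sketch below. The `FaithfulnessEvaluator` class and the `evaluateResponse` call are assumptions drawn from the evaluation module rather than from this page, and the sample document is hypothetical:

```ts
import { Document, FaithfulnessEvaluator, VectorStoreIndex } from "llamaindex";

const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "New York City has a population of about 8.3 million people." }),
]);

const query = "How many people live in New York City?";
const response = await index.asQueryEngine().query({ query });

// Check whether the generated answer is supported by the retrieved context.
const evaluator = new FaithfulnessEvaluator();
const result = await evaluator.evaluateResponse({ query, response });
console.log(result.passing ? "faithful" : "not faithful");
```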
@@ -36,7 +36,7 @@ main().catch(console.error);
You can implement any transformation yourself by extending `TransformComponent`.
The following custom transformation will remove any special characters or punctutation in text.
The following custom transformation will remove any special characters or punctuation in text.
```ts
import { TransformComponent, TextNode } from "llamaindex";
// ...
```
@@ -75,3 +75,7 @@ async function main() {
```ts
// ...
main().catch(console.error);
```
## API Reference
- [TransformComponent](../../api/classes/TransformComponent.md)
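
For reference, the punctuation-stripping transformation described above could look roughly like the following sketch. It assumes `TransformComponent` can be subclassed with an async `transform` method over the pipeline's nodes, as the truncated snippet above suggests:

```ts
import { TransformComponent, TextNode } from "llamaindex";

// Removes punctuation and other special characters from each node's text.
export class RemoveSpecialCharacters extends TransformComponent {
  async transform(nodes: TextNode[]): Promise<TextNode[]> {
    for (const node of nodes) {
      node.text = node.text.replace(/[^\w\s]/g, "");
    }
    return nodes;
  }
}
```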
# DeepSeek LLM
[DeepSeek Platform](https://platform.deepseek.com/)
## Usage
```ts
// ...
```
@@ -45,6 +47,6 @@ Currently does not support function calling.
[The API does not currently support a json-output parameter, although the model is still very good at generating JSON.](https://platform.deepseek.com/api-docs/faq#does-your-api-support-json-output)
## API platform
## API Reference
- [DeepSeek platform](https://platform.deepseek.com/)
- [DeepSeekLLM](../../../api/classes/DeepSeekLLM.md)
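
A hedged usage sketch of the `DeepSeekLLM` class referenced above. The constructor options and the model name (`deepseek-chat`) are assumptions to be checked against the DeepSeek platform docs:

```ts
import { DeepSeekLLM, Settings } from "llamaindex";

// Option and model names here are assumptions; see the DeepSeek platform docs.
const deepseek = new DeepSeekLLM({
  apiKey: process.env.DEEPSEEK_API_KEY,
  model: "deepseek-chat",
});

// Optionally make it the default LLM for the rest of the application.
Settings.llm = deepseek;

// No json-output parameter is used; the model is simply prompted for JSON.
const response = await deepseek.chat({
  messages: [{ role: "user", content: 'Reply with a single JSON object: {"ok": true}' }],
});
console.log(response.message.content);
```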
@@ -163,3 +163,7 @@ Use the `rerank` method to reorder the documents based on the query.
```ts
const result = await reranker.rerank(documents, query);
console.log(result); // Perfectly customized results, ready to serve.
```
## API Reference
- [MixedbreadAIReranker](../../api/classes/MixedbreadAIReranker.md)
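
A short sketch of how the reranker might be called end to end. The constructor options and the use of plain strings as `documents` are assumptions; only the `rerank(documents, query)` call comes from this page:

```ts
import { MixedbreadAIReranker } from "llamaindex";

// Constructor options (API key, topN) are assumptions following the usual pattern.
const reranker = new MixedbreadAIReranker({
  apiKey: process.env.MXBAI_API_KEY,
  topN: 2,
});

const query = "Who wrote 'To Kill a Mockingbird'?";
const documents = [
  "'To Kill a Mockingbird' is a novel by Harper Lee published in 1960.",
  "'Moby-Dick' was written by Herman Melville.",
  "Harper Lee was born in Monroeville, Alabama.",
];

// Reorder the documents by relevance to the query.
const result = await reranker.rerank(documents, query);
console.log(result);
```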
# QueryEngine
A query engine wraps a `Retriever` and a `ResponseSynthesizer` into a pipeline, that will use the query string to fetech nodes and then send them to the LLM to generate a response.
A query engine wraps a `Retriever` and a `ResponseSynthesizer` into a pipeline that uses the query string to fetch nodes and then sends them to the LLM to generate a response.
```typescript
const queryEngine = index.asQueryEngine();
// ...
```
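
Putting it together, issuing a query might look like the sketch below; the sample document is hypothetical and the index setup follows the indexing docs:

```typescript
import { Document, VectorStoreIndex } from "llamaindex";

const index = await VectorStoreIndex.fromDocuments([
  new Document({ text: "The author grew up painting and writing short stories." }),
]);

const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({ query: "What did the author do growing up?" });
console.log(response.toString());
```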
@@ -4,7 +4,14 @@ sidebar_position: 5
# Retriever
A retriever in LlamaIndex is what is used to fetch `Node`s from an index using a query string. Aa `VectorIndexRetriever` will fetch the top-k most similar nodes. Meanwhile, a `SummaryIndexRetriever` will fetch all nodes no matter the query.
A retriever in LlamaIndex is what is used to fetch `Node`s from an index using a query string.
- [VectorIndexRetriever](../api/classes/VectorIndexRetriever.md) will fetch the top-k most similar nodes. Ideal for dense retrieval when you need the most relevant nodes.
- [SummaryIndexRetriever](../api/classes/SummaryIndexRetriever.md) will fetch all nodes no matter the query. Ideal when complete context is necessary, e.g. analyzing large datasets.
- [SummaryIndexLLMRetriever](../api/classes/SummaryIndexLLMRetriever.md) utilizes an LLM to score and filter nodes based on relevancy to the query.
- [KeywordTableLLMRetriever](../api/classes/KeywordTableLLMRetriever.md) uses an LLM to extract keywords from the query and retrieve relevant nodes based on keyword matches.
- [KeywordTableSimpleRetriever](../api/classes/KeywordTableSimpleRetriever.md) uses a basic frequency-based approach to extract keywords and retrieve nodes.
- [KeywordTableRAKERetriever](../api/classes/KeywordTableRAKERetriever.md) uses the RAKE (Rapid Automatic Keyword Extraction) algorithm to extract keywords from the query, focusing on co-occurrence and context for keyword-based retrieval.
```typescript
const retriever = vectorIndex.asRetriever({
  // ...
});
```
@@ -14,9 +21,3 @@ const retriever = vectorIndex.asRetriever({
```typescript
// Fetch nodes!
const nodesWithScore = await retriever.retrieve({ query: "query string" });
```
## API Reference
- [SummaryIndexRetriever](../api/classes/SummaryIndexRetriever.md)
- [SummaryIndexLLMRetriever](../api/classes/SummaryIndexLLMRetriever.md)
- [VectorIndexRetriever](../api/classes/VectorIndexRetriever.md)
@@ -129,9 +129,9 @@ export class MongoDBAtlasVectorSearch
* Function to determine the number of candidates to retrieve for a given query.
* In case your results are not good, you might tune this value.
*
* {@link https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/|Run Vector Search Queries}
* {@link https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-stage/ | Run Vector Search Queries}
*
* {@link https://arxiv.org/abs/1603.09320|Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs}
* {@link https://arxiv.org/abs/1603.09320 | Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs}
*
*
* Default: query.similarityTopK * 10
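
To illustrate the tuning described in this doc comment, overriding the candidate count when constructing the store might look roughly like this sketch. All option names below (`mongodbClient`, `dbName`, `collectionName`, `indexName`, `numCandidates`) are assumptions based on the class's documented defaults rather than a verified signature:

```ts
import { MongoClient } from "mongodb";
import { MongoDBAtlasVectorSearch } from "llamaindex";

// Option names are assumptions; check the MongoDBAtlasVectorSearch reference.
const vectorStore = new MongoDBAtlasVectorSearch({
  mongodbClient: new MongoClient(process.env.MONGODB_URI!),
  dbName: "my_db",
  collectionName: "my_collection",
  indexName: "vector_index",
  // Override the default of query.similarityTopK * 10 with a larger candidate pool.
  numCandidates: (query) => query.similarityTopK * 20,
});
```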