Unverified commit b4c6d509 authored by Emanuel Ferreira, committed by GitHub

docs: available embeddings (#538)

parent 34cd57b6
label: "Embeddings"
position: 3
label: "Available Embeddings"
# HuggingFace
To use HuggingFace embeddings, you need to import `HuggingFaceEmbedding` from `llamaindex`.
```ts
import { Document, HuggingFaceEmbedding, VectorStoreIndex, serviceContextFromDefaults } from "llamaindex";
const huggingFaceEmbeds = new HuggingFaceEmbedding();
const serviceContext = serviceContextFromDefaults({ embedModel: huggingFaceEmbeds });
const document = new Document({ text: essay, id_: "essay" });
const index = await VectorStoreIndex.fromDocuments([document], {
serviceContext,
});
const queryEngine = index.asQueryEngine();
const query = "What is the meaning of life?";
const results = await queryEngine.query({
query,
});
```
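Behind the scenes, `VectorStoreIndex` stores one embedding vector per document chunk, and the query engine ranks chunks by vector similarity to the query embedding. A minimal plain-TypeScript sketch of that ranking step (the `cosine` and `topK` helpers here are illustrative only, not part of `llamaindex`):

```typescript
// Cosine similarity between two dense vectors: dot(a, b) / (|a| * |b|).
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the indices of the k stored embeddings most similar to the query.
function topK(query: number[], store: number[][], k: number): number[] {
  return store
    .map((emb, i) => ({ i, score: cosine(query, emb) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((r) => r.i);
}
```

A real vector store also handles persistence and approximate search, but the ranking idea is the same.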
# MistralAI
To use MistralAI embeddings, you need to import `MistralAIEmbedding` from `llamaindex`.
```ts
import { Document, MistralAIEmbedding, VectorStoreIndex, serviceContextFromDefaults } from "llamaindex";
const mistralEmbedModel = new MistralAIEmbedding({
apiKey: "<YOUR_API_KEY>",
});
const serviceContext = serviceContextFromDefaults({
embedModel: mistralEmbedModel,
});
const document = new Document({ text: essay, id_: "essay" });
const index = await VectorStoreIndex.fromDocuments([document], {
serviceContext,
});
const queryEngine = index.asQueryEngine();
const query = "What is the meaning of life?";
const results = await queryEngine.query({
query,
});
```
# Ollama
To use Ollama embeddings, you need to import `Ollama` from `llamaindex`.
```ts
import { Document, Ollama, VectorStoreIndex, serviceContextFromDefaults } from "llamaindex";
const ollamaEmbedModel = new Ollama({ model: "llama2" });
const serviceContext = serviceContextFromDefaults({
embedModel: ollamaEmbedModel,
});
const document = new Document({ text: essay, id_: "essay" });
const index = await VectorStoreIndex.fromDocuments([document], {
serviceContext,
});
const queryEngine = index.asQueryEngine();
const query = "What is the meaning of life?";
const results = await queryEngine.query({
query,
});
```
# OpenAI
To use OpenAI embeddings, you need to import `OpenAIEmbedding` from `llamaindex`.
```ts
import { Document, OpenAIEmbedding, VectorStoreIndex, serviceContextFromDefaults } from "llamaindex";
const openaiEmbedModel = new OpenAIEmbedding();
const serviceContext = serviceContextFromDefaults({
embedModel: openaiEmbedModel,
});
const document = new Document({ text: essay, id_: "essay" });
const index = await VectorStoreIndex.fromDocuments([document], {
serviceContext,
});
const queryEngine = index.asQueryEngine();
const query = "What is the meaning of life?";
const results = await queryEngine.query({
query,
});
```
# Together
To use Together AI embeddings, you need to import `TogetherEmbedding` from `llamaindex`.
```ts
import { Document, TogetherEmbedding, VectorStoreIndex, serviceContextFromDefaults } from "llamaindex";
const togetherEmbedModel = new TogetherEmbedding({
apiKey: "<YOUR_API_KEY>",
});
const serviceContext = serviceContextFromDefaults({
embedModel: togetherEmbedModel,
});
const document = new Document({ text: essay, id_: "essay" });
const index = await VectorStoreIndex.fromDocuments([document], {
serviceContext,
});
const queryEngine = index.asQueryEngine();
const query = "What is the meaning of life?";
const results = await queryEngine.query({
query,
});
```
# Embedding
The embedding model in LlamaIndex is responsible for creating numerical representations of text. By default, LlamaIndex will use the `text-embedding-ada-002` model from OpenAI.
This can be explicitly set in the `ServiceContext` object.
```typescript
import { OpenAIEmbedding, serviceContextFromDefaults } from "llamaindex";
const openaiEmbeds = new OpenAIEmbedding();
const serviceContext = serviceContextFromDefaults({ embedModel: openaiEmbeds });
```
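These numerical representations are compared with cosine similarity, the default similarity measure for dense retrieval. A plain-TypeScript sketch of the formula (the `cosineSimilarity` helper is illustrative, not a `llamaindex` export):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), in [-1, 1] for real vectors.
// 1 means the vectors point the same way; 0 means they are orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("embedding dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```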
## Local Embedding
For local embeddings, you can use the [HuggingFace](./available_embeddings/huggingface.md) embedding model.
## API Reference
- [OpenAIEmbedding](../api/classes/OpenAIEmbedding.md)
- [ServiceContext](../api/interfaces/ServiceContext.md)
```
export AZURE_OPENAI_ENDPOINT="<YOUR ENDPOINT, see https://learn.microsoft.com/en
export AZURE_OPENAI_DEPLOYMENT="gpt-4" # or some other deployment name
```
## Local LLM
For local LLMs, we currently recommend using the [Ollama](./available_llms/ollama.md) LLM.
## API Reference
- [OpenAI](../api/classes/OpenAI.md)