Commit 73c18876 authored by Logan Markewich

fix some docs usage

parent 0fbf7b4a
@@ -8,7 +8,7 @@ A query engine wraps a `Retriever` and a `ResponseSynthesizer` into a pipeline,
 ```typescript
 const queryEngine = index.asQueryEngine();
-const response = queryEngine.query("query string");
+const response = await queryEngine.query("query string");
 ```
 ## Sub Question Query Engine
@@ -9,11 +9,11 @@ The embedding model in LlamaIndex is responsible for creating numerical represen
 This can be explicitly set in the `ServiceContext` object.
 ```typescript
-import { OpenAIEmbedding, ServiceContext } from "llamaindex";
+import { OpenAIEmbedding, serviceContextFromDefaults } from "llamaindex";
 const openaiEmbeds = new OpenAIEmbedding();
-const serviceContext = new ServiceContext({ embedModel: openaiEmbeds });
+const serviceContext = serviceContextFromDefaults({ embedModel: openaiEmbeds });
 ```
 ## API Reference
@@ -9,14 +9,14 @@ The LLM is responsible for reading text and generating natural language response
 The LLM can be explicitly set in the `ServiceContext` object.
 ```typescript
-import { ChatGPTLLMPredictor, ServiceContext } from "llamaindex";
+import { OpenAI, serviceContextFromDefaults } from "llamaindex";
-const openaiLLM = new ChatGPTLLMPredictor({ model: "gpt-3.5-turbo" });
+const openaiLLM = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
-const serviceContext = new ServiceContext({ llmPredictor: openaiLLM });
+const serviceContext = serviceContextFromDefaults({ llm: openaiLLM });
 ```
 ## API Reference
-- [ChatGPTLLMPredictor](../../api/classes/ChatGPTLLMPredictor.md)
+- [OpenAI](../../api/classes/OpenAI.md)
 - [ServiceContext](../../api/interfaces/ServiceContext.md)
\ No newline at end of file
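Taken together, the three hunks above track one API migration in llamaindex: `serviceContextFromDefaults` replaces direct `ServiceContext` construction, the `OpenAI` class replaces `ChatGPTLLMPredictor`, and `query()` returns a Promise that must be awaited. A minimal end-to-end sketch of the corrected usage, assuming the `llamaindex` package from this era of the API; the document text and query string are placeholders, not from the commit:

```typescript
// Sketch combining the corrected APIs from the hunks above.
// Assumes the llamaindex package is installed and OPENAI_API_KEY is set.
import {
  Document,
  OpenAI,
  OpenAIEmbedding,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";

async function main() {
  // Configure the LLM and embedding model through the helper,
  // rather than constructing a ServiceContext directly.
  const serviceContext = serviceContextFromDefaults({
    llm: new OpenAI({ model: "gpt-3.5-turbo", temperature: 0 }),
    embedModel: new OpenAIEmbedding(),
  });

  // Build an index over a single in-memory document (placeholder text).
  const document = new Document({
    text: "LlamaIndex is a data framework for LLM applications.",
  });
  const index = await VectorStoreIndex.fromDocuments([document], {
    serviceContext,
  });

  // query() is async, so the result must be awaited.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query("What is LlamaIndex?");
  console.log(response.toString());
}

main();
```

The commit's core fix is the missing `await`: without it, `response` is an unresolved Promise rather than the synthesized answer.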