Commit b033d0fb authored by Yi Ding

Merge branch 'main' of github.com:run-llama/LlamaIndexTS

parents 122ab88f 73c18876
......@@ -18,6 +18,10 @@ Create a list index and query it. This example also uses the `LLMRetriever`, which
Create a vector index and query it. The vector index will use embeddings to fetch the top k most relevant nodes. By default, the top k is 2.
## [Customized Vector Index](https://github.com/run-llama/LlamaIndexTS/blob/main/apps/simple/vectorIndexCustomize.ts)
Create a vector index and query it, while also configuring the `LLM`, the `ServiceContext`, and the `similarity_top_k`.
## [OpenAI LLM](https://github.com/run-llama/LlamaIndexTS/blob/main/apps/simple/openai.ts)
Create an OpenAI LLM and directly use it for chat.
......
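For orientation, here is a minimal sketch of the basic vector-index flow these examples describe, assembled only from APIs that appear elsewhere in this commit (`Document`, `VectorStoreIndex`, `asQueryEngine`); the sample text and query are placeholders.

```typescript
import { Document, VectorStoreIndex } from "llamaindex";

(async () => {
  // Wrap raw text in a Document and build a vector index over it.
  const document = new Document({ text: "Some text to index." });
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Query the index; by default the top 2 most similar nodes are retrieved.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query("What is this text about?");
  console.log(response.toString());
})();
```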
......@@ -8,7 +8,7 @@ A query engine wraps a `Retriever` and a `ResponseSynthesizer` into a pipeline,
```typescript
const queryEngine = index.asQueryEngine();
-const response = queryEngine.query("query string");
+const response = await queryEngine.query("query string");
```
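The same pipeline can also be assembled by hand; a sketch based on the `RetrieverQueryEngine` usage that appears later in this commit:

```typescript
import { Document, VectorStoreIndex, RetrieverQueryEngine } from "llamaindex";

(async () => {
  const document = new Document({ text: "Some text to index." });
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Build the retriever half of the pipeline explicitly.
  const retriever = index.asRetriever();
  retriever.similarityTopK = 3;

  // Wrap it in a query engine; a default ResponseSynthesizer is used.
  const queryEngine = new RetrieverQueryEngine(retriever);
  const response = await queryEngine.query("query string");
  console.log(response.toString());
})();
```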
## Sub Question Query Engine
......
......@@ -9,11 +9,11 @@ The embedding model in LlamaIndex is responsible for creating numerical represen
This can be explicitly set in the `ServiceContext` object.
```typescript
-import { OpenAIEmbedding, ServiceContext } from "llamaindex";
+import { OpenAIEmbedding, serviceContextFromDefaults } from "llamaindex";
const openaiEmbeds = new OpenAIEmbedding();
-const serviceContext = new ServiceContext({ embedModel: openaiEmbeds });
+const serviceContext = serviceContextFromDefaults({ embedModel: openaiEmbeds });
```
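The `ServiceContext` takes effect where the index is built; a sketch following the `fromDocuments` call used later in this commit, where the second argument (skipped with `undefined`) is assumed to be the storage context:

```typescript
import {
  Document,
  OpenAIEmbedding,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";

(async () => {
  // Service context carrying the explicit embedding model.
  const serviceContext = serviceContextFromDefaults({
    embedModel: new OpenAIEmbedding(),
  });

  const document = new Document({ text: "Some text to index." });
  // Assumed: the second argument is the storage context; undefined keeps the default.
  const index = await VectorStoreIndex.fromDocuments([document], undefined, serviceContext);
})();
```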
## API Reference
......
......@@ -9,14 +9,14 @@ The LLM is responsible for reading text and generating natural language response
The LLM can be explicitly set in the `ServiceContext` object.
```typescript
-import { ChatGPTLLMPredictor, ServiceContext } from "llamaindex";
+import { OpenAI, serviceContextFromDefaults } from "llamaindex";
-const openaiLLM = new ChatGPTLLMPredictor({ model: "gpt-3.5-turbo" });
+const openaiLLM = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0 });
-const serviceContext = new ServiceContext({ llmPredictor: openaiLLM });
+const serviceContext = serviceContextFromDefaults({ llm: openaiLLM });
```
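The `OpenAI` class can also be used directly, outside a `ServiceContext`; a sketch mirroring the new `openai.ts` example later in this commit:

```typescript
import { OpenAI } from "llamaindex";

(async () => {
  const llm = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

  // Completion API: a single prompt string in, one message out.
  const completion = await llm.complete("How are you?");
  console.log(completion.message.content);

  // Chat API: a list of role-tagged messages.
  const chat = await llm.chat([{ content: "Tell me a joke!", role: "user" }]);
  console.log(chat.message.content);
})();
```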
## API Reference
-- [ChatGPTLLMPredictor](../../api/classes/ChatGPTLLMPredictor.md)
+- [OpenAI](../../api/classes/OpenAI.md)
- [ServiceContext](../../api/interfaces/ServiceContext.md)
\ No newline at end of file
......@@ -26,7 +26,7 @@ async function main() {
  while (true) {
    const query = await rl.question("Query: ");
    const response = await chatEngine.chat(query);
-    console.log(response);
+    console.log(response.toString());
  }
}
......
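For context, a hedged sketch of the setup this loop sits in, using Node's `readline/promises`; the `SimpleChatEngine` construction is an assumption, and the actual example may build its chat engine differently:

```typescript
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";
// Assumption: a SimpleChatEngine with a default LLM; the real example may differ.
import { SimpleChatEngine } from "llamaindex";

async function main() {
  const rl = readline.createInterface({ input, output });
  const chatEngine = new SimpleChatEngine();

  // Read-eval-print loop: each question is sent to the chat engine.
  while (true) {
    const query = await rl.question("Query: ");
    const response = await chatEngine.chat(query);
    console.log(response.toString());
  }
}

main();
```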
// @ts-ignore
import process from "node:process";
-import { Configuration, OpenAIWrapper } from "llamaindex/src/llm/openai";
+import { OpenAI } from "llamaindex";
(async () => {
-  const configuration = new Configuration({
-    apiKey: process.env.OPENAI_API_KEY,
-  });
-  const openai = new OpenAIWrapper(configuration);
-  const { data } = await openai.createChatCompletion({
-    model: "gpt-3.5-turbo-0613",
-    messages: [{ role: "user", content: "Hello, world!" }],
-  });
-  console.log(data);
-  console.log(data.choices[0].message);
+  const llm = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0.0 });
+  // complete api
+  const response1 = await llm.complete("How are you?");
+  console.log(response1.message.content);
+  // chat api
+  const response2 = await llm.chat([{ content: "Tell me a joke!", role: "user" }]);
+  console.log(response2.message.content);
})();
......@@ -22,5 +22,5 @@ import essay from "./essay";
"How was Paul Grahams life different before and after YC?"
);
-  console.log(response);
+  console.log(response.toString());
})();
-import { Document, VectorStoreIndex, RetrieverQueryEngine } from "llamaindex";
+import { Document, VectorStoreIndex, RetrieverQueryEngine, OpenAI, serviceContextFromDefaults } from "llamaindex";
import essay from "./essay";
// Customize retrieval and query args
async function main() {
  const document = new Document({ text: essay });
-  const index = await VectorStoreIndex.fromDocuments([document]);
+  const serviceContext = serviceContextFromDefaults(
+    { llm: new OpenAI({ model: "gpt-3.5-turbo", temperature: 0.0 }) }
+  );
+  const index = await VectorStoreIndex.fromDocuments([document], undefined, serviceContext);
  const retriever = index.asRetriever();
  retriever.similarityTopK = 5;
  // TODO: cannot pass responseSynthesizer into retriever query engine
  const queryEngine = new RetrieverQueryEngine(retriever);
  const response = await queryEngine.query(
    "What did the author do growing up?"
  );
......