Commit 0fbf7b4a authored by Logan Markewich

update examples and docs

parent f9394ebb
@@ -18,6 +18,10 @@ Create a list index and query it. This example also uses the `LLMRetriever`, which ...
 Create a vector index and query it. The vector index will use embeddings to fetch the top k most relevant nodes. By default, the top k is 2.

+## [Customized Vector Index](https://github.com/run-llama/LlamaIndexTS/blob/main/apps/simple/vectorIndexCustomize.ts)
+
+Create a vector index and query it, while also configuring the `LLM`, the `ServiceContext`, and the `similarity_top_k`.
+
 ## [OpenAI LLM](https://github.com/run-llama/LlamaIndexTS/blob/main/apps/simple/openai.ts)

 Create an OpenAI LLM and directly use it for chat.
...
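Since the README's Vector Index entry describes the behavior only in prose, here is a minimal sketch of what that example boils down to, assembled entirely from calls shown in the hunks below (the `essay` module is the sample document these examples share):

```typescript
import { Document, VectorStoreIndex, RetrieverQueryEngine } from "llamaindex";
import essay from "./essay";

(async () => {
  // Embed and index a single document; queries are matched against the
  // node embeddings at retrieval time.
  const document = new Document({ text: essay });
  const index = await VectorStoreIndex.fromDocuments([document]);

  // The retriever returns the top k most similar nodes; k defaults to 2
  // and can be raised, as the customized example below does.
  const retriever = index.asRetriever();

  const queryEngine = new RetrieverQueryEngine(retriever);
  const response = await queryEngine.query("What did the author do growing up?");
  console.log(response.toString());
})();
```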
@@ -26,7 +26,7 @@ async function main() {
   while (true) {
     const query = await rl.question("Query: ");
     const response = await chatEngine.chat(query);
-    console.log(response);
+    console.log(response.toString());
   }
 }
...
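For orientation, the loop above lives inside the repo's chat-engine example. The sketch below reconstructs the surrounding setup from the hunk; the `ContextChatEngine` construction and the readline wiring are assumptions (they are not part of this diff), so treat it as illustrative only:

```typescript
import readline from "node:readline/promises";
import { stdin as input, stdout as output } from "node:process";
import { Document, VectorStoreIndex, ContextChatEngine } from "llamaindex";
import essay from "./essay";

async function main() {
  // Index the essay so each chat turn can retrieve relevant context.
  const document = new Document({ text: essay });
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Assumption: a retriever-backed chat engine, as in the repo around this commit.
  const chatEngine = new ContextChatEngine({ retriever: index.asRetriever() });

  const rl = readline.createInterface({ input, output });
  while (true) {
    const query = await rl.question("Query: ");
    const response = await chatEngine.chat(query);
    console.log(response.toString());
  }
}

main();
```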
-// @ts-ignore
-import process from "node:process";
-import { Configuration, OpenAIWrapper } from "llamaindex/src/llm/openai";
+import { OpenAI } from "llamaindex";

 (async () => {
-  const configuration = new Configuration({
-    apiKey: process.env.OPENAI_API_KEY,
-  });
-
-  const openai = new OpenAIWrapper(configuration);
-  const { data } = await openai.createChatCompletion({
-    model: "gpt-3.5-turbo-0613",
-    messages: [{ role: "user", content: "Hello, world!" }],
-  });
-
-  console.log(data);
-  console.log(data.choices[0].message);
+  const llm = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0.0 });
+
+  // complete api
+  const response1 = await llm.complete("How are you?");
+  console.log(response1.message.content);
+
+  // chat api
+  const response2 = await llm.chat([{ content: "Tell me a joke!", role: "user" }]);
+  console.log(response2.message.content);
 })();
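The new `chat()` signature takes the full message history as an array, so multi-turn prompts are just longer arrays. A hedged sketch under that reading; the `system` role is an assumption that the wrapper forwards roles to the OpenAI chat API unchanged:

```typescript
import { OpenAI } from "llamaindex";

(async () => {
  const llm = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0.0 });

  // Each entry mirrors the { content, role } shape used in the diff above.
  const response = await llm.chat([
    { content: "You answer in exactly one sentence.", role: "system" },
    { content: "What is a vector index?", role: "user" },
  ]);
  console.log(response.message.content);
})();
```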
@@ -22,5 +22,5 @@ import essay from "./essay";
     "How was Paul Grahams life different before and after YC?"
   );
-  console.log(response);
+  console.log(response.toString());
 })();
-import { Document, VectorStoreIndex, RetrieverQueryEngine } from "llamaindex";
+import { Document, VectorStoreIndex, RetrieverQueryEngine, OpenAI, serviceContextFromDefaults } from "llamaindex";
 import essay from "./essay";

 // Customize retrieval and query args
 async function main() {
   const document = new Document({ text: essay });
-  const index = await VectorStoreIndex.fromDocuments([document]);
+  const serviceContext = serviceContextFromDefaults(
+    { llm: new OpenAI({ model: "gpt-3.5-turbo", temperature: 0.0 }) }
+  );
+  const index = await VectorStoreIndex.fromDocuments([document], undefined, serviceContext);
   const retriever = index.asRetriever();
   retriever.similarityTopK = 5;
   // TODO: cannot pass responseSynthesizer into retriever query engine
   const queryEngine = new RetrieverQueryEngine(retriever);
   const response = await queryEngine.query(
     "What did the author do growing up?"
   );
...
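Because `serviceContextFromDefaults` bundles the components an index falls back on, the same options object is the natural place to swap other defaults. A sketch under that assumption: the `embedModel` key and the `OpenAIEmbedding` class are assumptions about this version's API, not something shown in the diff:

```typescript
import { OpenAI, OpenAIEmbedding, serviceContextFromDefaults } from "llamaindex";

// Assumption: embedModel is accepted alongside llm in the defaults object.
const serviceContext = serviceContextFromDefaults({
  llm: new OpenAI({ model: "gpt-3.5-turbo", temperature: 0.0 }),
  embedModel: new OpenAIEmbedding(),
});
```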