From b963782137c7e8e1142701f3dddade3c9d8db6f4 Mon Sep 17 00:00:00 2001
From: Marcus Schiesser <mail@marcusschiesser.de>
Date: Wed, 22 May 2024 21:54:27 +0800
Subject: [PATCH] docs: reorder installation steps (#869)

---
 .../retrieval_augmented_generation.mdx | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/apps/docs/docs/getting_started/starter_tutorial/retrieval_augmented_generation.mdx b/apps/docs/docs/getting_started/starter_tutorial/retrieval_augmented_generation.mdx
index 599960c1f..a618c83e0 100644
--- a/apps/docs/docs/getting_started/starter_tutorial/retrieval_augmented_generation.mdx
+++ b/apps/docs/docs/getting_started/starter_tutorial/retrieval_augmented_generation.mdx
@@ -10,21 +10,19 @@ import TSConfigSource from "!!raw-loader!../../../../../examples/tsconfig.json";
 
 One of the most common use-cases for LlamaIndex is Retrieval-Augmented Generation or RAG, in which your data is indexed and selectively retrieved to be given to an LLM as source material for responding to a query. You can learn more about the [concepts behind RAG](../concepts).
 
-## Before you start
+## Set up the project
 
-Make sure you have installed LlamaIndex.TS and have an OpenAI key. If you haven't, check out the [installation](../installation) steps.
-
-You can use [other LLMs](../examples/other_llms) via their APIs; if you would prefer to use local models check out our [local LLM example](../../examples/local_llm).
-
-## Set up
-
-In a new folder:
+In a new folder, run:
 
 ```bash npm2yarn
 npm init
 npm install -D typescript @types/node
 ```
 
+Then, check out the [installation](../installation) steps to install LlamaIndex.TS and prepare an OpenAI key.
+
+You can use [other LLMs](../examples/other_llms) via their APIs; if you would prefer to use local models check out our [local LLM example](../../examples/local_llm).
+
 ## Run queries
 
 Create the file `example.ts`. This code will
--
GitLab
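
For orientation, below is a minimal sketch of the kind of `example.ts` the tutorial touched by this patch goes on to build with LlamaIndex.TS; it is an illustration, not the file from the repository. It assumes the `llamaindex` package has been installed alongside the dev dependencies shown in the diff and that `OPENAI_API_KEY` is set in the environment; the data file path and the query string are placeholders.

```ts
import fs from "node:fs/promises";
import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  // Load some local text to index (placeholder path).
  const text = await fs.readFile("data/essay.txt", "utf-8");
  const document = new Document({ text });

  // Build an in-memory vector index. Embeddings and the LLM default to
  // OpenAI, which is why OPENAI_API_KEY must be set.
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Query the indexed data. Note: in some llamaindex versions, query()
  // accepts a plain string instead of an options object.
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What is this document about?", // placeholder question
  });

  console.log(response.toString());
}

main().catch(console.error);
```

With the project set up as in the diff, a script like this could be run with something like `npx tsx example.ts`, printing an answer grounded in the indexed text.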