import TSConfigSource from "!!raw-loader!../../../../../examples/tsconfig.json";
One of the most common use-cases for LlamaIndex is Retrieval-Augmented Generation or RAG, in which your data is indexed and selectively retrieved to be given to an LLM as source material for responding to a query. You can learn more about the [concepts behind RAG](../concepts).
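The retrieve-then-generate loop behind RAG can be sketched in plain TypeScript. The toy keyword-overlap retriever below is illustrative only: a real index uses vector embeddings, and none of these names (`Doc`, `retrieve`, `buildPrompt`) come from the LlamaIndex.TS API.

```typescript
type Doc = { id: string; text: string };

// Score a document by how many tokens it shares with the query.
function score(query: string, doc: Doc): number {
  const tokens = new Set(query.toLowerCase().split(/\W+/));
  return doc.text
    .toLowerCase()
    .split(/\W+/)
    .filter((t) => tokens.has(t)).length;
}

// Retrieve the top-k highest-scoring documents for the query.
function retrieve(query: string, docs: Doc[], k = 1): Doc[] {
  return [...docs]
    .sort((a, b) => score(query, b) - score(query, a))
    .slice(0, k);
}

// Assemble the prompt the LLM would receive: retrieved context plus the query.
function buildPrompt(query: string, docs: Doc[]): string {
  const context = retrieve(query, docs)
    .map((d) => d.text)
    .join("\n");
  return `Context:\n${context}\n\nQuestion: ${query}`;
}

const docs: Doc[] = [
  { id: "a", text: "The capital of France is Paris." },
  { id: "b", text: "TypeScript compiles to JavaScript." },
];

console.log(buildPrompt("What is the capital of France?", docs));
```

LlamaIndex.TS handles the indexing, embedding, and retrieval steps for you; this sketch only shows where the retrieved text ends up relative to the user's question.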
## Set up the project

In a new folder:
```bash npm2yarn
npm init
npm install -D typescript @types/node
```
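With TypeScript installed, you will also want a `tsconfig.json` in the folder. The settings below are a minimal illustrative sketch, not the exact file from the examples directory; adjust `target` and `module` to match your Node.js version.

```json
{
  "compilerOptions": {
    "target": "es2020",
    "module": "node16",
    "moduleResolution": "node16",
    "esModuleInterop": true,
    "strict": true,
    "outDir": "./dist"
  },
  "include": ["./**/*.ts"]
}
```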
Then, check out the [installation](../installation) steps to install LlamaIndex.TS and prepare an OpenAI key.
You can use [other LLMs](../examples/other_llms) via their APIs; if you would prefer to use local models check out our [local LLM example](../../examples/local_llm).