diff --git a/.changeset/olive-foxes-watch.md b/.changeset/olive-foxes-watch.md
new file mode 100644
index 0000000000000000000000000000000000000000..3ae39d5bf50593dae7dddc3f9fe4f839913d04eb
--- /dev/null
+++ b/.changeset/olive-foxes-watch.md
@@ -0,0 +1,5 @@
+---
+"@llamaindex/doc": patch
+---
+
+Fix internal links between chapters
diff --git a/apps/next/src/content/docs/llamaindex/guide/agents/1_setup.mdx b/apps/next/src/content/docs/llamaindex/guide/agents/1_setup.mdx
index 770fb26c87923d83622af607c134f3e7b23cfb57..022308116dbb44904b412c7fb3e2d5c4de80b6c4 100644
--- a/apps/next/src/content/docs/llamaindex/guide/agents/1_setup.mdx
+++ b/apps/next/src/content/docs/llamaindex/guide/agents/1_setup.mdx
@@ -20,7 +20,7 @@ npm install llamaindex
 
 ## Choose your model
 
-By default we'll be using OpenAI with GPT-4, as it's a powerful model and easy to get started with. If you'd prefer to run a local model, see [using a local model](local_model).
+By default we'll be using OpenAI with GPT-4, as it's a powerful model and easy to get started with. If you'd prefer to run a local model, see [using a local model](3_local_model).
 
 ## Get an OpenAI API key
 
@@ -36,4 +36,4 @@ We'll use `dotenv` to pull the API key out of that .env file, so also run:
 npm install dotenv
 ```
 
-Now you're ready to [create your agent](create_agent).
+Now you're ready to [create your agent](2_create_agent).
diff --git a/apps/next/src/content/docs/llamaindex/guide/agents/2_create_agent.mdx b/apps/next/src/content/docs/llamaindex/guide/agents/2_create_agent.mdx
index 909250a2b2d6e5edbfb5957a487374daebd73ad0..860af45739c36e773e47dda8d0fb46b6d2afe3a5 100644
--- a/apps/next/src/content/docs/llamaindex/guide/agents/2_create_agent.mdx
+++ b/apps/next/src/content/docs/llamaindex/guide/agents/2_create_agent.mdx
@@ -177,5 +177,5 @@ The second piece of output is the response from the LLM itself, where the `messa
 Great! We've built an agent with tool use!
 Next you can:
 - [See the full code](https://github.com/run-llama/ts-agents/blob/main/1_agent/agent.ts)
-- [Switch to a local LLM](local_model)
-- Move on to [add Retrieval-Augmented Generation to your agent](agentic_rag)
+- [Switch to a local LLM](3_local_model)
+- Move on to [add Retrieval-Augmented Generation to your agent](4_agentic_rag)
diff --git a/apps/next/src/content/docs/llamaindex/guide/agents/3_local_model.mdx b/apps/next/src/content/docs/llamaindex/guide/agents/3_local_model.mdx
index 0c649dfe374630086f50f90e5f61bf213c88bee7..0224ff51f9604f18f5a820c0c656f6b65fa21212 100644
--- a/apps/next/src/content/docs/llamaindex/guide/agents/3_local_model.mdx
+++ b/apps/next/src/content/docs/llamaindex/guide/agents/3_local_model.mdx
@@ -89,4 +89,4 @@ You can use a ReActAgent instead of an OpenAIAgent in any of the further example
 
 ### Next steps
 
-Now you've got a local agent, you can [add Retrieval-Augmented Generation to your agent](agentic_rag).
+Now you've got a local agent, you can [add Retrieval-Augmented Generation to your agent](4_agentic_rag).
diff --git a/apps/next/src/content/docs/llamaindex/guide/agents/4_agentic_rag.mdx b/apps/next/src/content/docs/llamaindex/guide/agents/4_agentic_rag.mdx
index f5f4432afb94d74bdcabd4b7ef4e1c4cd580a90e..465f38299a63c04cf179b95bdf2d9c4526e626b9 100644
--- a/apps/next/src/content/docs/llamaindex/guide/agents/4_agentic_rag.mdx
+++ b/apps/next/src/content/docs/llamaindex/guide/agents/4_agentic_rag.mdx
@@ -153,4 +153,4 @@ The `OpenAIContextAwareAgent` approach simplifies the setup by allowing you to d
 On the other hand, using the `QueryEngineTool` offers more flexibility and power. This method allows for customization in how queries are constructed and executed, enabling you to query data from various storages and process them in different ways.
 However, this added flexibility comes with increased complexity and response time due to the separate tool call and queryEngine generating tool output by LLM that is then passed to the agent.
 
-So now we have an agent that can index complicated documents and answer questions about them. Let's [combine our math agent and our RAG agent](rag_and_tools)!
+So now we have an agent that can index complicated documents and answer questions about them. Let's [combine our math agent and our RAG agent](5_rag_and_tools)!
diff --git a/apps/next/src/content/docs/llamaindex/guide/agents/5_rag_and_tools.mdx b/apps/next/src/content/docs/llamaindex/guide/agents/5_rag_and_tools.mdx
index 0f95857d2fd4528d11edf1eb1baf11cce61a2bc6..b68a939a3e62105bbd715bcd35bf8948ec1a61db 100644
--- a/apps/next/src/content/docs/llamaindex/guide/agents/5_rag_and_tools.mdx
+++ b/apps/next/src/content/docs/llamaindex/guide/agents/5_rag_and_tools.mdx
@@ -127,4 +127,4 @@ In the final tool call, it used the `sumNumbers` function to add the two budgets
 }
 ```
 
-Great! Now let's improve accuracy by improving our parsing with [LlamaParse](llamaparse).
+Great! Now let's improve accuracy by improving our parsing with [LlamaParse](6_llamaparse).
diff --git a/apps/next/src/content/docs/llamaindex/guide/agents/6_llamaparse.mdx b/apps/next/src/content/docs/llamaindex/guide/agents/6_llamaparse.mdx
index dc0047addfad3830830a61449a387a74d327d7f1..1eb845b954802d93ab4363bf07fd7bb1fa32d8dd 100644
--- a/apps/next/src/content/docs/llamaindex/guide/agents/6_llamaparse.mdx
+++ b/apps/next/src/content/docs/llamaindex/guide/agents/6_llamaparse.mdx
@@ -17,4 +17,4 @@ const documents = await reader.loadData("../data/sf_budget_2023_2024.pdf");
 
 Now you will be able to ask more complicated questions of the same PDF and get better results. You can find this code [in our repo](https://github.com/run-llama/ts-agents/blob/main/4_llamaparse/agent.ts).
 
-Next up, let's persist our embedded data so we don't have to re-parse every time by [using a vector store](qdrant).
+Next up, let's persist our embedded data so we don't have to re-parse every time by [using a vector store](7_qdrant).
diff --git a/apps/next/src/content/docs/llamaindex/guide/agents/7_qdrant.mdx b/apps/next/src/content/docs/llamaindex/guide/agents/7_qdrant.mdx
index d6154c580db4e18bafd1903a1c0c1ffd47dbdbdf..eb3c4500530d2e6ffd8b04adc900c8b63212bbaf 100644
--- a/apps/next/src/content/docs/llamaindex/guide/agents/7_qdrant.mdx
+++ b/apps/next/src/content/docs/llamaindex/guide/agents/7_qdrant.mdx
@@ -65,13 +65,13 @@ Since parsing a PDF can be slow, especially a large one, using the pre-parsed ch
 
 In this guide you've learned how to
 
-- [Create an agent](create_agent)
+- [Create an agent](2_create_agent)
 - Use remote LLMs like GPT-4
-- [Use local LLMs like Mixtral](local_model)
-- [Create a RAG query engine](agentic_rag)
-- [Turn functions and query engines into agent tools](rag_and_tools)
+- [Use local LLMs like Mixtral](3_local_model)
+- [Create a RAG query engine](4_agentic_rag)
+- [Turn functions and query engines into agent tools](5_rag_and_tools)
 - Combine those tools
-- [Enhance your parsing with LlamaParse](llamaparse)
+- [Enhance your parsing with LlamaParse](6_llamaparse)
 - Persist your data in a vector store
 
 The next steps are up to you! Try creating more complex functions and query engines, and set your agent loose on the world.