From 92e661e4d9b9bbd34d80754a656e393614ff0fad Mon Sep 17 00:00:00 2001
From: Suraj Subramanian <5676233+subramen@users.noreply.github.com>
Date: Fri, 28 Jun 2024 13:42:44 -0400
Subject: [PATCH] Update renamed links

---
 recipes/quickstart/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/recipes/quickstart/README.md b/recipes/quickstart/README.md
index 8fcea959..f9010344 100644
--- a/recipes/quickstart/README.md
+++ b/recipes/quickstart/README.md
@@ -3,8 +3,8 @@
 If you are new to developing with Meta Llama models, this is where you should start. This folder contains introductory-level notebooks across different techniques relating to Meta Llama.
 
 * The [Running_Llama3_Anywhere](./Running_Llama3_Anywhere/) notebooks demonstrate how to run Llama inference across Linux, Mac and Windows platforms using the appropriate tooling.
-* The [Prompt_Engineering_with_Llama_3](./prompt_engineering/Prompt_Engineering_with_Llama_3.ipynb) notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
-* The [inference](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [3p_integrations/vllm](../3p_integrations/vllm/) and [3p_integrations/tgi](../3p_integrations/tgi/) for hosting Llama on open-source model servers.
+* The [Prompt_Engineering_with_Llama_3](./Prompt_Engineering_with_Llama_3.ipynb) notebook showcases the various ways to elicit appropriate outputs from Llama. Take this notebook for a spin to get a feel for how Llama responds to different inputs and generation parameters.
+* The [inference](./inference/) folder contains scripts to deploy Llama for inference on server and mobile. See also [3p_integration/vllm](../3p_integration/vllm/) and [3p_integration/tgi](../3p_integration/tgi/) for hosting Llama on open-source model servers.
 * The [RAG](./RAG/) folder contains a simple Retrieval-Augmented Generation application using Llama 3.
 * The [finetuning](./finetuning/) folder contains resources to help you finetune Llama 3 on your custom datasets, for both single- and multi-GPU setups. The scripts use the native llama-recipes finetuning code found in [finetuning.py](../../src/llama_recipes/finetuning.py) which supports these features:
 
-- 
GitLab