diff --git a/3p-integrations/togetherai/README.md b/3p-integrations/togetherai/README.md
index 296dc6c95f0b16495d0b303370b6a4447266e252..bf409ba5eb7d53f5de8709a4b21d26d594383cd9 100644
--- a/3p-integrations/togetherai/README.md
+++ b/3p-integrations/togetherai/README.md
@@ -12,7 +12,7 @@ While the code examples are primarily written in Python/JS, the concepts can be
 
 | Cookbook | Description | Open |
 | -------- | ----------- | ---- |
-| [MultiModal RAG with Nvidia Investor Slide Deck](https://github.com/meta-llama/llama-recipes/blob/main/recipes/3p_integrations/togetherai/multimodal_RAG_with_nvidia_investor_slide_deck.ipynb) | Multimodal RAG using Nvidia investor slides. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/multimodal_RAG_with_nvidia_investor_slide_deck.ipynb) [![](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/youtubebadge.svg)](https://youtu.be/IluARWPYAUc?si=gG90hqpboQgNOAYG)|
+| [MultiModal RAG with Nvidia Investor Slide Deck](https://github.com/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/multimodal_RAG_with_nvidia_investor_slide_deck.ipynb) | Multimodal RAG using Nvidia investor slides. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/multimodal_RAG_with_nvidia_investor_slide_deck.ipynb) [![YouTube](https://uohmivykqgnnbiouffke.supabase.co/storage/v1/object/public/landingpage/youtubebadge.svg)](https://youtu.be/IluARWPYAUc?si=gG90hqpboQgNOAYG) |
 | [Llama Contextual RAG](./llama_contextual_RAG.ipynb) | Implementation of Contextual Retrieval using Llama models. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/llama_contextual_RAG.ipynb) |
 | [Llama PDF to podcast](./pdf_to_podcast_using_llama_on_together.ipynb) | Generate a podcast from PDF content using Llama. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/pdf_to_podcast_using_llama_on_together.ipynb) |
 | [Knowledge Graphs with Structured Outputs](./knowledge_graphs_with_structured_outputs.ipynb) | Get Llama to generate knowledge graphs. | [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/meta-llama/llama-cookbook/blob/main/3p-integrations/togetherai/knowledge_graphs_with_structured_outputs.ipynb) |
diff --git a/end-to-end-use-cases/Multi-Modal-RAG/README.md b/end-to-end-use-cases/Multi-Modal-RAG/README.md
index c941ebf4b7ded668cd9837f2ea4aa62c518e1f02..aa45d5059fe14ab02130ccc1ac9a0a9408de6522 100644
--- a/end-to-end-use-cases/Multi-Modal-RAG/README.md
+++ b/end-to-end-use-cases/Multi-Modal-RAG/README.md
@@ -13,7 +13,20 @@ This is a complete workshop on how to label images using the new Llama 3.2-Visio
 Before we start:
 
 1. Please grab your HF CLI Token from [here](https://huggingface.co/settings/tokens)
-2. Git clone [this dataset](https://huggingface.co/datasets/Sanyam/MM-Demo) inside the Multi-Modal-RAG folder: `git clone https://huggingface.co/datasets/Sanyam/MM-Demo` (Remember to thank the original author by upvoting [Kaggle Dataset](https://www.kaggle.com/datasets/agrigorev/clothing-dataset-full))
+2. Git clone [this dataset](https://huggingface.co/datasets/Sanyam/MM-Demo) inside the Multi-Modal-RAG folder: `git clone https://huggingface.co/datasets/Sanyam/MM-Demo` (Remember to thank the original author by upvoting the [Kaggle Dataset](https://www.kaggle.com/datasets/agrigorev/clothing-dataset-full))
 3. Make sure you grab a together.ai token [here](https://www.together.ai)
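+
+A minimal setup sketch for the three steps above (illustrative only; the `huggingface-cli login` command and the `TOGETHER_API_KEY` variable name are assumptions, check each tool's docs):
+
+```bash
+# Log in to Hugging Face with the token from step 1
+huggingface-cli login
+
+# Clone the demo dataset inside the Multi-Modal-RAG folder (step 2)
+git clone https://huggingface.co/datasets/Sanyam/MM-Demo
+
+# Make the together.ai key from step 3 available to the notebooks/scripts (variable name assumed)
+export TOGETHER_API_KEY="your-key-here"
+```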
 
 ## Detailed Outline for running:
@@ -107,7 +107,7 @@ Note: We can further improve the description prompt. You will notice sometimes t
 
 Credit and Thanks to List of models and resources used in the showcase:
 
-Firstly, thanks to the author here for providing this dataset on which we base our exercise []()
+Firstly, thanks to the author for providing the [dataset](https://www.kaggle.com/datasets/agrigorev/clothing-dataset-full) on which we base our exercise.
 
 - [Llama-3.2-11B-Vision-Instruct Model](https://www.llama.com/docs/how-to-guides/vision-capabilities/)
 - [Lance-db for vector database](https://lancedb.com)