diff --git a/recipes/experimental/long-context/H2O/README.md b/recipes/experimental/long_context/H2O/README.md
similarity index 100%
rename from recipes/experimental/long-context/H2O/README.md
rename to recipes/experimental/long_context/H2O/README.md
diff --git a/recipes/experimental/long-context/H2O/data/summarization/cnn_dailymail.jsonl b/recipes/experimental/long_context/H2O/data/summarization/cnn_dailymail.jsonl
similarity index 100%
rename from recipes/experimental/long-context/H2O/data/summarization/cnn_dailymail.jsonl
rename to recipes/experimental/long_context/H2O/data/summarization/cnn_dailymail.jsonl
diff --git a/recipes/experimental/long-context/H2O/data/summarization/xsum.jsonl b/recipes/experimental/long_context/H2O/data/summarization/xsum.jsonl
similarity index 100%
rename from recipes/experimental/long-context/H2O/data/summarization/xsum.jsonl
rename to recipes/experimental/long_context/H2O/data/summarization/xsum.jsonl
diff --git a/recipes/experimental/long-context/H2O/requirements.txt b/recipes/experimental/long_context/H2O/requirements.txt
similarity index 100%
rename from recipes/experimental/long-context/H2O/requirements.txt
rename to recipes/experimental/long_context/H2O/requirements.txt
diff --git a/recipes/experimental/long-context/H2O/run_streaming.py b/recipes/experimental/long_context/H2O/run_streaming.py
similarity index 100%
rename from recipes/experimental/long-context/H2O/run_streaming.py
rename to recipes/experimental/long_context/H2O/run_streaming.py
diff --git a/recipes/experimental/long-context/H2O/run_summarization.py b/recipes/experimental/long_context/H2O/run_summarization.py
similarity index 100%
rename from recipes/experimental/long-context/H2O/run_summarization.py
rename to recipes/experimental/long_context/H2O/run_summarization.py
diff --git a/recipes/experimental/long-context/H2O/src/streaming.sh b/recipes/experimental/long_context/H2O/src/streaming.sh
similarity index 100%
rename from recipes/experimental/long-context/H2O/src/streaming.sh
rename to recipes/experimental/long_context/H2O/src/streaming.sh
diff --git a/recipes/experimental/long-context/H2O/utils/cache.py b/recipes/experimental/long_context/H2O/utils/cache.py
similarity index 100%
rename from recipes/experimental/long-context/H2O/utils/cache.py
rename to recipes/experimental/long_context/H2O/utils/cache.py
diff --git a/recipes/experimental/long-context/H2O/utils/llama.py b/recipes/experimental/long_context/H2O/utils/llama.py
similarity index 100%
rename from recipes/experimental/long-context/H2O/utils/llama.py
rename to recipes/experimental/long_context/H2O/utils/llama.py
diff --git a/recipes/experimental/long-context/H2O/utils/streaming.py b/recipes/experimental/long_context/H2O/utils/streaming.py
similarity index 100%
rename from recipes/experimental/long-context/H2O/utils/streaming.py
rename to recipes/experimental/long_context/H2O/utils/streaming.py
diff --git a/recipes/responsible_ai/README.md b/recipes/responsible_ai/README.md
index e268f85b5b68b77111d290ecc4feb1d99f7509f8..2ac05a37d6bf4aaccd9f4516653f9be438ee8c03 100644
--- a/recipes/responsible_ai/README.md
+++ b/recipes/responsible_ai/README.md
@@ -2,10 +2,10 @@
 
 Meta Llama Guard and Meta Llama Guard 2 are new models that provide input and output guardrails for LLM inference. For more details, please visit the main [repository](https://github.com/facebookresearch/PurpleLlama/tree/main/Llama-Guard2).
 
-**Note** Please find the right model on HF side [here](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B). 
+**Note** The corresponding model is available on the Hugging Face Hub [here](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B).
 
 ### Running locally
 The [llama_guard](llama_guard) folder contains the inference script to run Meta Llama Guard locally. Add test prompts directly to the [inference script](llama_guard/inference.py) before running it.
 
 ### Running on the cloud
-The notebooks [Purple_Llama_Anyscale](Purple_Llama_Anyscale.ipynb) & [Purple_Llama_OctoAI](Purple_Llama_OctoAI.ipynb) contain examples for running Meta Llama Guard on cloud hosted endpoints.
\ No newline at end of file
+The notebooks [Purple_Llama_Anyscale](purple_llama_anyscale.ipynb) and [Purple_Llama_OctoAI](purple_llama_octoai.ipynb) contain examples for running Meta Llama Guard on cloud-hosted endpoints.
diff --git a/recipes/responsible_ai/CodeShieldUsageDemo.ipynb b/recipes/responsible_ai/code_shield_usage_demo.ipynb
similarity index 100%
rename from recipes/responsible_ai/CodeShieldUsageDemo.ipynb
rename to recipes/responsible_ai/code_shield_usage_demo.ipynb
diff --git a/recipes/use_cases/README.md b/recipes/use_cases/README.md
index 2ba913478ae4e83453dd81d2e049951b629b381a..1aa54765efcc399caa639e7bb4351046bd8265ec 100644
--- a/recipes/use_cases/README.md
+++ b/recipes/use_cases/README.md
@@ -13,11 +13,11 @@ This step-by-step tutorial shows how to use the [WhatsApp Business API](https://
 ## [Messenger Chatbot](./customerservice_chatbots/messenger_llama/messenger_llama3.md): Building a Llama 3 Enabled Messenger Chatbot
 This step-by-step tutorial shows how to use the [Messenger Platform](https://developers.facebook.com/docs/messenger-platform/overview) to build a Llama 3 enabled Messenger chatbot.
 
-### RAG Chatbot Example (running [locally](./customerservice_chatbots/RAG_chatbot/RAG_Chatbot_Example.ipynb) or on [OctoAI](../3p_integration/octoai/RAG_Chatbot_example/RAG_Chatbot_Example.ipynb))
+### RAG Chatbot Example (running [locally](./customerservice_chatbots/RAG_chatbot/RAG_chatbot_example.ipynb) or on [OctoAI](../3p_integration/octoai/RAG_chatbot_example/RAG_chatbot_example.ipynb))
 A complete example of how to build a Llama 3 chatbot hosted in your browser that can answer questions based on your own data using retrieval augmented generation (RAG). You can run Llama 3 locally if you have a good enough GPU, or on OctoAI if you follow the note [here](../README.md#octoai_note).
 
 ## [Sales Bot](./customerservice_chatbots/sales_bot/SalesBot.ipynb): Sales Bot with Llama 3 - A Summarization and RAG Use Case
 A summarization + RAG use case built around the Amazon product review Kaggle dataset, producing a helpful Music Store Sales Bot. The summarization and RAG are built on top of Llama models hosted on OctoAI, and the vector database is hosted on Weaviate Cloud Services.
 
-## [Media Generation](./MediaGen.ipynb): Building a Video Generation Pipeline with Llama3
+## [Media Generation](./mediagen.ipynb): Building a Video Generation Pipeline with Llama3
 This step-by-step tutorial shows how to leverage Llama 3 to drive the generation of animated videos using SDXL and SVD. More specifically, it relies on JSON formatting to produce a scene-by-scene storyboard of a recipe video. The user provides the name of a dish, then Llama 3 describes a step-by-step guide to prepare that dish. This guide is brought to life with models like SDXL and SVD.