diff --git a/examples/examples_with_aws/Prompt_Engineering_with_Llama_2_On_Amazon_Bedrock.ipynb b/examples/examples_with_aws/Prompt_Engineering_with_Llama_2_On_Amazon_Bedrock.ipynb
index f873fa96bfb9623a7a4a7afe2608bf8b648611cd..928a46f585539f6d3a784897aa86dea637277db5 100644
--- a/examples/examples_with_aws/Prompt_Engineering_with_Llama_2_On_Amazon_Bedrock.ipynb
+++ b/examples/examples_with_aws/Prompt_Engineering_with_Llama_2_On_Amazon_Bedrock.ipynb
@@ -410,7 +410,9 @@
     "**Prompt Format Example:** `[INST] {prompt_1} [/INST]`\n",
     "\n",
     "#### Why?\n",
-    "In theory, you could use the previous section's roles to instruct the model, for example by using `User:` or `Assistant:`, but for longer conversations it's possible the model responses may forget the role and you may need prompt with the roles again, or the model could begin including the roles in the response. By using the `[INST][/INST]` tags, the model may have more consistent and accurate response over the longer conversations, and you will not run the risk of the tags being included in the response.\n",
+    "In theory, you could use the previous section's roles to instruct the model, for example by using `User:` or `Assistant:`, but in longer conversations the model may forget the roles and you may need to prompt with them again, or the model could begin including the roles in its responses. By using the `[INST][/INST]` tags, the model can produce more consistent and accurate responses over longer conversations, and you will not run the risk of the tags being included in the response.\n",
+    "\n",
+    "You can read more about the `[INST]` tags in the [Llama 2 Whitepaper](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/), section **3.3 System Message for Multi-Turn Consistency**, which introduces Ghost Attention (GAtt), the method used to keep Llama 2 consistent with its instructions across turns.\n",
     "\n",
     "#### Examples:\n",
     "`[INST]\n",
@@ -1266,7 +1268,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 59,
+   "execution_count": 72,
    "metadata": {},
    "outputs": [
     {