From ce10080d5ae558eeaf1c5f03c38a99eb75b2874e Mon Sep 17 00:00:00 2001
From: sekyonda <127536312+sekyondaMeta@users.noreply.github.com>
Date: Tue, 17 Oct 2023 11:34:47 -0400
Subject: [PATCH] Update HelloLlamaCloud.ipynb

---
 demo_apps/HelloLlamaCloud.ipynb | 182 +++++++-------------------------
 1 file changed, 39 insertions(+), 143 deletions(-)

diff --git a/demo_apps/HelloLlamaCloud.ipynb b/demo_apps/HelloLlamaCloud.ipynb
index 7d13cd6c..dac62212 100644
--- a/demo_apps/HelloLlamaCloud.ipynb
+++ b/demo_apps/HelloLlamaCloud.ipynb
@@ -20,7 +20,7 @@
    "id": "61dde626",
    "metadata": {},
    "source": [
-    "We start by installing the necessary packages:\n",
+    "Let's start by installing the necessary packages:\n",
     "- sentence-transformers for text embeddings\n",
     "- chromadb gives us database capabilities \n",
     "- langchain provides necessary RAG tools for this demo\n",
@@ -40,18 +40,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
+   "execution_count": null,
    "id": "b9c5546a",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " ········\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "from getpass import getpass\n",
     "import os\n",
@@ -66,7 +58,8 @@
    "metadata": {},
    "source": [
     "Next we call the Llama 2 model from replicate. In this example we will use the llama 2 13b chat model. You can find more Llama 2 models by searching for them on the [Replicate model explore page](https://replicate.com/explore?query=llama).\n",
-    "You can add them here in the format: model_name/version"
+    "\n",
+    "You can add them here in the format: model_name/version\n"
    ]
   },
   {
@@ -95,18 +88,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": null,
    "id": "493a7148",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Hello! I'd be happy to help you with your question. The book \"The Innovator's Dilemma\" was written by Clayton Christensen, an American author and professor at Harvard Business School. It was first published in 1997 and has since become a widely influential work on innovation and business strategy. Do you have any other questions or would you like more information on this topic?\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "question = \"who wrote the book Innovator's dilemma?\"\n",
     "answer = llm(question)\n",
@@ -119,23 +104,16 @@
    "metadata": {},
    "source": [
     "We will then try to follow up the response with a question asking for more information on the book. \n",
-    "Since the chat history not passed on Llama doesn't have the context and doesn't know this is more about the book thus it treats this as new query."
+    "\n",
+    "Since the chat history is not passed on, Llama doesn't have the context and doesn't know this follow-up is about the book, so it treats it as a new query.\n"
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 5,
+   "execution_count": null,
    "id": "9b5c8676",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Hello! I'm here to assist you with any questions or tasks you may have. I can provide information on a wide range of topics, from science and history to entertainment and culture. I can also help with practical tasks such as converting units of measurement or calculating dates and times. Is there something specific you would like to know or discuss?\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# chat history not passed so Llama doesn't have the context and doesn't know this is more about the book\n",
     "followup = \"tell me more\"\n",
@@ -149,12 +127,13 @@
    "metadata": {},
    "source": [
     "To get around this we will need to provide the model with history of the chat. \n",
+    "\n",
     "To do this, we will use  [`ConversationBufferMemory`](https://python.langchain.com/docs/modules/memory/types/buffer) to pass the chat history to the model and give it the capability to handle follow up questions."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
+   "execution_count": null,
    "id": "5428ca27",
    "metadata": {},
    "outputs": [],
@@ -177,25 +156,16 @@
    "metadata": {},
    "source": [
     "Once this is set up, let us repeat the steps from before and ask the model a simple question.\n",
+    "\n",
     "Then we pass the question and answer back into the model for context along with the follow up question."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
+   "execution_count": null,
    "id": "baee2d22",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Ah, you're asking about \"The Innovator's Dilemma,\" that classic book by Clayton Christensen! He's a renowned author and professor at Harvard Business School, known for his work on disruptive innovation and how established companies can struggle to adapt to new technologies and business models.\n",
-      "\n",
-      "In fact, I have access to a wealth of information on this topic, as well as other areas of expertise. Would you like me to share some interesting facts or insights about Clayton Christensen or his book? For example, did you know that he coined the term\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# restart from the original question\n",
     "answer = conversation.predict(input=question)\n",
@@ -204,21 +174,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 8,
+   "execution_count": null,
    "id": "9c7d67a8",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Sure thing! Here are some additional details about Clayton Christensen and his book \"The Innovator's Dilemma\":\n",
-      "\n",
-      "1. The book was first published in 1997 and has since become a seminal work in the field of innovation and entrepreneurship.\n",
-      "2. Christensen's central argument is that successful companies often struggle to adopt new technologies and business models because they are too focused on sustaining their existing businesses. This can lead to a \"dilemma\" where these companies fail to innovate and eventually lose market share to newer, more ag\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# pass context (previous question and answer) along with the follow up \"tell me more\" to Llama who now knows more of what\n",
     "memory.save_context({\"input\": question},\n",
@@ -241,7 +200,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 9,
+   "execution_count": null,
    "id": "f5303d75",
    "metadata": {},
    "outputs": [],
@@ -254,22 +213,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 10,
+   "execution_count": null,
    "id": "678c2b4a",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "77 Llama 2 : Open Foundation and Fine-Tuned Chat Models\n",
-      "Hugo Touvron∗Louis Martin†Kevin Stone†\n",
-      "Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra\n",
-      "Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen\n",
-      "Guillem Cucurull David Esiobu Jude Fernande\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# check docs length and content\n",
     "print(len(docs), docs[0].page_content[0:300])"
@@ -289,7 +236,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 11,
+   "execution_count": null,
    "id": "eecb6a34",
    "metadata": {},
    "outputs": [],
@@ -309,14 +256,14 @@
    "id": "36d4a17c",
    "metadata": {},
    "source": [
-    "To store the documents, we will need to split them into chunks using [`RecursiveCharacterTextSplitter`](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter) and create vector representations of these chunks using [`HuggingFaceEmbeddings`](https://www.google.com/search?q=langchain+hugging+face+embeddings&sca_esv=572890011&ei=ARUoZaH4LuumptQP48ah2Ac&oq=langchian+hugg&gs_lp=Egxnd3Mtd2l6LXNlcnAiDmxhbmdjaGlhbiBodWdnKgIIADIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCkjeHlC5Cli5D3ABeAGQAQCYAV6gAb4CqgEBNLgBAcgBAPgBAcICChAAGEcY1gQYsAPiAwQYACBBiAYBkAYI&sclient=gws-wiz-serp) to them before storing them into our vector database. \n",
+    "To store the documents, we will need to split them into chunks using [`RecursiveCharacterTextSplitter`](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter) and create vector representations of these chunks using [`HuggingFaceEmbeddings`](https://www.google.com/search?q=langchain+hugging+face+embeddings&sca_esv=572890011&ei=ARUoZaH4LuumptQP48ah2Ac&oq=langchian+hugg&gs_lp=Egxnd3Mtd2l6LXNlcnAiDmxhbmdjaGlhbiBodWdnKgIIADIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCkjeHlC5Cli5D3ABeAGQAQCYAV6gAb4CqgEBNLgBAcgBAPgBAcICChAAGEcY1gQYsAPiAwQYACBBiAYBkAYI&sclient=gws-wiz-serp) on them before storing them into our vector database. \n",
     "\n",
     "In general, you should use larger chuck sizes for highly structured text such as code and smaller size for less structured text. You may need to experiment with different chunk sizes and overlap values to find out the best numbers."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 12,
+   "execution_count": null,
    "id": "bc65e161",
    "metadata": {},
    "outputs": [],
@@ -338,23 +285,16 @@
    "metadata": {},
    "source": [
     "We then use ` RetrievalQA` to retrieve the documents from the vector database and give the model more context on Llama 2, thereby increasing its knowledge.\n",
+    "\n",
     "For each question, LangChain performs a semantic similarity search of it in the vector db, then passes the search results as the context to Llama to answer the question."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 13,
+   "execution_count": null,
    "id": "00e3f72b",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Based on the provided text, Llama2 appears to be a language model developed by Meta AI that is designed for dialogue use cases. It is a fine-tuned version of the original Llama model, with improved performance and safety features. The model has been trained on a large dataset of text and has undergone testing in English, but it may not cover all scenarios or produce accurate responses in certain instances. As such, developers are advised to perform safety testing and tuning before deploying any applications of Llama2. Additionally, the model is released under a responsible use guide and code\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# use LangChain's RetrievalQA, to associate Llama with the loaded documents stored in the vector db\n",
     "from langchain.chains import RetrievalQA\n",
@@ -376,30 +316,17 @@
    "metadata": {},
    "source": [
     "Now, lets bring it all together by incorporating follow up questions.\n",
+    "\n",
     "First we ask a follow up questions without giving the model context of the previous conversation. \n",
     "Without this context, the answer we get does not relate to our original question."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 14,
+   "execution_count": null,
    "id": "53f27473",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Based on the context provided, I don't see any explicit mention of \"its\" use cases. However, I can provide some possible inferences based on the information given:\n",
-      "\n",
-      "The text mentions a partnerships team and product and technical organization support, which suggests that the tool or approach being referred to is likely related to product development or customer support.\n",
-      "\n",
-      "The emphasis on prioritizing harmlessness over informativeness and helpfulness suggests that the tool may be used for moderation or content review purposes, where the goal is to avoid causing harm or offense while still providing useful information.\n",
-      "\n",
-      "The\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# no context passed so Llama2 doesn't have enough context to answer so it lets its imagination go wild\n",
     "result = qa_chain({\"query\": \"what are its use cases?\"})\n",
@@ -411,12 +338,12 @@
    "id": "833221c0",
    "metadata": {},
    "source": [
-    "As we did before, let us use the ConversationalRetrievalChain package to give the model context of our previous question so we can add follow up questions."
+    "As we did before, let us use [`ConversationalRetrievalChain`](https://python.langchain.com/docs/modules/chains) to give the model the context of our previous question so we can ask follow up questions."
    ]
   },
   {
    "cell_type": "code",
-   "execution_count": 15,
+   "execution_count": null,
    "id": "743644a1",
    "metadata": {},
    "outputs": [],
@@ -428,18 +355,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 16,
+   "execution_count": null,
    "id": "7c3d1142",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Based on the provided text, Llama2 appears to be a language model developed by Meta AI that is designed for dialogue use cases. It is a fine-tuned version of the original Llama model, with improved performance and safety features. The model has been trained on a large dataset of text and has undergone testing in English, but it may not cover all scenarios or produce accurate responses in certain instances. As such, developers are advised to perform safety testing and tuning before deploying any applications of Llama2. Additionally, the model is released under a responsible use guide and code\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# let's ask the original question \"What is llama2?\" again\n",
     "result = chat_chain({\"question\": question, \"chat_history\": []})\n",
@@ -448,22 +367,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 17,
+   "execution_count": null,
    "id": "4b17f08f",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Based on the provided context, here are some potential use cases for Llama2, a language model developed by Meta AI for dialogue use cases:\n",
-      "\n",
-      "1. Assistant-like chat: Tuned models of Llama2 can be used for assistant-like chat applications, such as customer service or personal assistants.\n",
-      "2. Natural language generation tasks: Pretrained models of Llama2 can be adapted for various natural language generation tasks, such as text summarization, machine translation, and content creation.\n",
-      "3. Research use cases: Llama2 can be used in research studies to\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# this time we pass chat history along with the follow up so good things should happen\n",
     "chat_history = [(question, result[\"answer\"])]\n",
@@ -478,6 +385,7 @@
    "metadata": {},
    "source": [
     "Further follow ups can be made possible by updating chat_history.\n",
+    "\n",
     "Note that results can get cut off. You may set \"max_new_tokens\" in the Replicate call above to a larger number (like shown below) to avoid the cut off.\n",
     "\n",
     "```python\n",
@@ -487,22 +395,10 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 18,
+   "execution_count": null,
    "id": "95d22347",
    "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      " Based on the information provided, Llama2 can assist with various natural language generation tasks, particularly in English. The model has been fine-tuned for assistant-like chat and has shown proficiency in other languages as well, although its proficiency is limited due to the limited amount of pretraining data available in non-English languages.\n",
-      "\n",
-      "Specifically, Llama2 can be used for tasks such as:\n",
-      "\n",
-      "1. Dialogue systems: Llama2 can be fine-tuned for different dialogue systems, such as customer service chatbots, virtual assistants,\n"
-     ]
-    }
-   ],
+   "outputs": [],
    "source": [
     "# further follow ups can be made possible by updating chat_history like this:\n",
     "chat_history.append((followup, followup_answer[\"answer\"]))\n",
-- 
GitLab