"* how to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and ask questions about it. This is the well known RAG (Retrieval Augmented Generation) method to let LLM such as Llama2 be able to answer questions about the data not publicly available when Llama2 was trained, or about your own data. RAG is one way to prevent LLM's hallucination. "
"* how to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and ask questions about it. This is the well known RAG (Retrieval Augmented Generation) method to let LLM such as Llama2 be able to answer questions about the data not publicly available when Llama2 was trained, or about your own data. RAG is one way to prevent LLM's hallucination. "
]
},
{
"cell_type": "markdown",
"id": "22450267",
"metadata": {},
"source": [
"We start by installing necessary requirements and import packages we will be using in this example.\n",
"- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) a simple Python bindings for [llama.cpp](https://github.com/ggerganov/llama.cpp) library\n",
"- pypdf gives us the ability to work with pdfs\n",
"- sentence-transformers for text embeddings\n",
"- chromadb gives us database capabilities \n",
"- langchain provides necessary RAG tools for this demo"
]
},
{
"cell_type": "code",
"execution_count": 1,
...
@@ -134,6 +147,14 @@
"from langchain.prompts import PromptTemplate"
]
},
{
"cell_type": "markdown",
"id": "73df46d9",
"metadata": {},
"source": [
"Next, initialize the langchain CallBackManager. This handles callbacks from Langchain and for this example we will use token-wise streaming so the answer gets generated token by token when Llama is answering your question."
"Replace `<path-to-llama-gguf-file>` with the path either to your downloaded quantized model file [here](https://drive.google.com/file/d/1afPv3HOy73BE2MoYCgYJvBDeQNa9rZbj/view?usp=sharing), or to the ggml-model-q4_0.gguf file built with the following commands:\n",
"For more info see https://python.langchain.com/docs/integrations/llms/llamacpp"
]
},
{
"cell_type": "code",
"execution_count": null,
...
@@ -152,7 +192,7 @@
"metadata": {},
"outputs": [],
"source": [
"# create the Llama2 model - for more info see https://python.langchain.com/docs/integrations/llms/llamacpp\n",
"\n",
"llm = LlamaCpp(\n",
"    model_path=\"<path-to-llama-gguf-file>\",\n",
"    temperature=0.0,\n",
...
@@ -163,6 +203,15 @@
")"
]
},
{
"cell_type": "markdown",
"id": "f2cae215",
"metadata": {},
"source": [
"With the model set up, you are now ready to ask some questions. \n",
"Here is an example of the simplest way to ask the model some general questions."
]
},
{
"cell_type": "code",
"execution_count": 5,
...
@@ -192,11 +241,20 @@
}
],
"source": [
"# the simplest way to ask Llama some general questions\n",
"\n",
"question = \"who wrote the book Innovator's dilemma?\"\n",
"answer = llm(question)"
]
},
{
"cell_type": "markdown",
"id": "545cb6aa",
"metadata": {},
"source": [
"Alternatively, you can sue LangChain's PromptTemplate for some flexibility in your prompts and questions.\n",
"For more information on LangChain's prompt template visit this [link](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/)"
]
},
{
"cell_type": "code",
"execution_count": 6,
...
@@ -241,6 +299,15 @@
"answer = chain.run(\"innovator's dilemma\")"
]
},
{
"cell_type": "markdown",
"id": "189de613",
"metadata": {},
"source": [
"Now, let's see how Llama2 hallucinates, because it did not have knowledge about Llama2 at the time it was trained. \n",
"By default it behaves like a know-it-all expert who will not say \"I don't know\"."
]
},
{
"cell_type": "code",
"execution_count": 7,
...
@@ -287,8 +354,7 @@
}
],
"source": [
"# let's see how Llama2 hallucinates, because it doesn't have the knowledge about Llama2 while the model was trained, \n",
"\n",
"# but by default it behaves like a know-it-all expert who can't afford to say I don't know\n",
"prompt = PromptTemplate.from_template(\n",
" \"What is {what}?\"\n",
")\n",
...
@@ -296,6 +362,15 @@
"answer = chain.run(\"llama2\")"
]
},
{
"cell_type": "markdown",
"id": "37f77909",
"metadata": {},
"source": [
"One way we can fix the hallucinations is to use RAG, to augment it with more recent or custom data that holds the info for it to answer correctly.\n",
"First we load the Llama2 paper using LangChain's [PDF loader](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf)"
]
},
{
"cell_type": "code",
"execution_count": 8,
...
@@ -303,8 +378,7 @@
"metadata": {},
"outputs": [],
"source": [
"# to fix the LLM's hallucination, one way is to use RAG, to augment it with more recent or custom data that holds the info for it to answer correctly\n",
"\n",
"# first load the Llama2 paper via LangChain's PDF loader\n",
...
"There are more than 30 vector stores (DBs) supported by LangChain.\n",
"For this example we will use [Chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma) which is light-weight and in memory so it's easy to get started with.\n",
"For other vector stores, especially if you need to store a large amount of data, see https://python.langchain.com/docs/integrations/vectorstores\n",
"\n",
"We will also import the HuggingFaceEmbeddings and RecursiveCharacterTextSplitter to assist in storing the documents."
]
},
{
"cell_type": "code",
"execution_count": 10,
...
@@ -340,8 +427,7 @@
"metadata": {},
"outputs": [],
"source": [
"# there are more than 30 vector stores (DBs) supported by LangChain. Chroma is light-weight and in memory so it's easy to get started with\n",
"\n",
"# other vector stores can be used to store large amounts of data - see https://python.langchain.com/docs/integrations/vectorstores\n",
"from langchain.vectorstores import Chroma\n",
"\n",
"# embeddings are numerical representations of the question and answer text\n",
...
"To store the documents, we will need to split them into chunks using [`RecursiveCharacterTextSplitter`](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter) and create vector representations of these chunks using [`HuggingFaceEmbeddings`](https://www.google.com/search?q=langchain+hugging+face+embeddings&sca_esv=572890011&ei=ARUoZaH4LuumptQP48ah2Ac&oq=langchian+hugg&gs_lp=Egxnd3Mtd2l6LXNlcnAiDmxhbmdjaGlhbiBodWdnKgIIADIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCkjeHlC5Cli5D3ABeAGQAQCYAV6gAb4CqgEBNLgBAcgBAPgBAcICChAAGEcY1gQYsAPiAwQYACBBiAYBkAYI&sclient=gws-wiz-serp) before storing them into our vector database.\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
...
@@ -370,6 +465,15 @@
")"
]
},
{
"cell_type": "markdown",
"id": "bddc38e8",
"metadata": {},
"source": [
"\n",
"We then use ` RetrievalQA` to retrieve the documents from the vector database and give the model more context on Llama 2, thereby increasing its knowledge."
]
},
{
"cell_type": "code",
"execution_count": 12,
...
@@ -411,14 +515,17 @@
"    llm,\n",
"    retriever=vectordb.as_retriever()\n",
")\n",
"\n",
"# for each question, LangChain performs a semantic similarity search of it in the vector db, then passes the search results as the context\n",
"# for Llama to answer questions about the data stored in the vector db\n",
"question = \"What is llama2?\"\n",
"result = qa_chain({\"query\": question})\n",
"# it takes close to 2 minutes to return the result (using vector stores other than Chroma, such as FAISS, can take even longer), because\n",
"# Llama2 is running on a local Mac. To get much faster results, you can use a cloud service with a GPU for inference - see HelloLlamaCloud\n",
"# for a demo."
]
},
{
"cell_type": "markdown",
"id": "db71e5d7",
"metadata": {},
"source": [
"For each question, LangChain performs a semantic similarity search of it in the vector db, then passes the search results as the context to the model to answer the question.\n",
"It takes close to 2 minutes to return the result (using vector stores other than Chroma, such as FAISS, can take even longer) because Llama2 is running on a local Mac.\n",
"To get much faster results, you can use a cloud service with a GPU for inference - see HelloLlamaCloud for a demo."
]
},
* how to run Llama2 locally on a Mac using llama-cpp-python and llama.cpp's quantized Llama2 model;
* how to use LangChain to ask Llama general questions;
* how to use LangChain to load a recent PDF doc - the Llama2 paper pdf - and ask questions about it. This is the well-known RAG (Retrieval Augmented Generation) method that lets an LLM such as Llama2 answer questions about data that was not publicly available when Llama2 was trained, or about your own data. RAG is one way to reduce LLM hallucination.
%% Cell type:markdown id:22450267 tags:
We start by installing the necessary requirements and importing the packages we will be using in this example; a sample install command is sketched right after this list.
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) provides simple Python bindings for the [llama.cpp](https://github.com/ggerganov/llama.cpp) library
- pypdf gives us the ability to work with PDFs
- sentence-transformers for text embeddings
- chromadb gives us database capabilities
- langchain provides necessary RAG tools for this demo
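If the packages above are not already installed in your environment, a minimal install command (package names as listed above, versions left unpinned as an assumption) might look like this:

``` python
# install the demo's dependencies from inside the notebook; pin versions as needed for reproducibility
%pip install llama-cpp-python pypdf sentence-transformers chromadb langchain
```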
Next, initialize the LangChain CallbackManager. This handles callbacks from LangChain, and for this example we will use token-wise streaming so the answer gets generated token by token when Llama is answering your question.

%% Cell type:code id:01fe5b9c tags:

``` python
# for token-wise streaming so you'll see the answer gets generated token by token when Llama is answering your question
```

Then create the Llama model. Replace `<path-to-llama-gguf-file>` with the path either to your downloaded quantized model file [here](https://drive.google.com/file/d/1afPv3HOy73BE2MoYCgYJvBDeQNa9rZbj/view?usp=sharing), or to the ggml-model-q4_0.gguf file built with llama.cpp's convert.py and quantize commands.
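Below is a rough sketch of the callback and model setup this notebook assumes; the parameter values (for example `n_ctx`) are illustrative assumptions rather than the notebook's exact settings, and `<path-to-llama-gguf-file>` remains a placeholder:

``` python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp

# stream generated tokens to stdout as Llama produces them
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# create the Llama2 model - for more info see https://python.langchain.com/docs/integrations/llms/llamacpp
llm = LlamaCpp(
    model_path="<path-to-llama-gguf-file>",  # placeholder - set to your local GGUF file
    temperature=0.0,                         # deterministic output, as used in this demo
    n_ctx=4096,                              # assumed context window; adjust for your model
    callback_manager=callback_manager,
    verbose=True,
)
```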
``` python
# the simplest way to ask Llama some general questions
question = "who wrote the book Innovator's dilemma?"
answer = llm(question)
```
%% Output

The book "The Innovator's Dilemma" was written by Clayton Christensen, a professor at Harvard Business School. It was first published in 1997 and has since become a widely influential book on business strategy and innovation.

llama_print_timings: load time = 1202.24 ms
llama_print_timings: sample time = 46.44 ms / 58 runs ( 0.80 ms per token, 1249.03 tokens per second)
llama_print_timings: prompt eval time = 1815.15 ms / 15 tokens ( 121.01 ms per token, 8.26 tokens per second)
llama_print_timings: eval time = 5582.64 ms / 57 runs ( 97.94 ms per token, 10.21 tokens per second)
llama_print_timings: total time = 7545.78 ms
%% Cell type:markdown id:545cb6aa tags:
Alternatively, you can use LangChain's PromptTemplate for some flexibility in your prompts and questions.
For more information on LangChain's prompt template visit this [link](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/)
``` python
# a more flexible way to ask Llama general questions using LangChain's PromptTemplate and LLMChain
prompt = PromptTemplate.from_template(
    "who wrote {book}?"
)
chain = LLMChain(llm=llm, prompt=prompt)
answer = chain.run("innovator's dilemma")
```
%% Output

Llama.generate: prefix-match hit

Clayton Christensen is the author of "The Innovator's Dilemma," which was first published in 1997. The book explores why successful companies often struggle to adapt to disruptive technologies and business models that ultimately lead to their downfall. Christensen argues that these companies are faced with a dilemma because they have invested so heavily in their existing products and processes that it is difficult for them to pivot and embrace new, disruptive technologies. He also introduces the concept of "disruptive innovation," which he defines as a process by which a small company with limited resources is able to successfully challenge established industry leaders.

llama_print_timings: load time = 1202.24 ms
llama_print_timings: sample time = 116.69 ms / 147 runs ( 0.79 ms per token, 1259.79 tokens per second)
llama_print_timings: prompt eval time = 1180.31 ms / 8 tokens ( 147.54 ms per token, 6.78 tokens per second)
llama_print_timings: eval time = 13192.98 ms / 147 runs ( 89.75 ms per token, 11.14 tokens per second)
llama_print_timings: total time = 14746.13 ms
%% Cell type:markdown id:189de613 tags:
Now, let's see how Llama2 hallucinates, because it did not have knowledge about Llama2 at the time it was trained.
By default it behaves like a know-it-all expert who will not say "I don't know".
``` python
# let's see how Llama2 hallucinates, because it doesn't have the knowledge about Llama2 while the model was trained,
# but by default it behaves like a know-it-all expert who can't afford to say I don't know
prompt = PromptTemplate.from_template(
    "What is {what}?"
)
chain = LLMChain(llm=llm, prompt=prompt)
answer = chain.run("llama2")
```
%% Output

Llama.generate: prefix-match hit

Llama2 is a free, open-source tool for generating high-quality, randomized test data for software applications. It is designed to be easy to use and to produce realistic, diverse test data that can help you identify and fix bugs in your application before it is released.

Llama2 is the successor to the popular Llama tool, and it includes many new features and improvements over its predecessor. Some of the key features of Llama2 include:

* Support for a wide range of data types, including strings, numbers, dates, and more
* The ability to generate random data based on user-defined rules and constraints
* A powerful and flexible API that allows you to customize and extend the tool to meet your specific needs
* Integration with popular testing frameworks and tools, such as JUnit and TestNG
* Support for a variety of programming languages, including Java, Python, C#, and more.

Overall, Llama2 is a powerful and flexible tool that can help you improve the quality and reliability of your software applications by generating realistic and diverse test data.

llama_print_timings: load time = 1202.24 ms
llama_print_timings: sample time = 191.25 ms / 240 runs ( 0.80 ms per token, 1254.87 tokens per second)
llama_print_timings: prompt eval time = 480.79 ms / 6 tokens ( 80.13 ms per token, 12.48 tokens per second)
llama_print_timings: eval time = 22013.19 ms / 239 runs ( 92.11 ms per token, 10.86 tokens per second)
llama_print_timings: total time = 23111.55 ms
%% Cell type:markdown id:37f77909 tags:
One way we can fix the hallucination is to use RAG, which augments the model with more recent or custom data that holds the information needed to answer correctly.
First we load the Llama2 paper using LangChain's [PDF loader](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf).
%% Cell type:code id:f3ebc261 tags:

``` python
# to fix the LLM's hallucination, one way is to use RAG, to augment it with more recent or custom data that holds the info for it to answer correctly

# first load the Llama2 paper via LangChain's PDF loader
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("llama2.pdf")
documents = loader.load()
```
%% Cell type:code id:302eaa54 tags:

``` python
# quick check on the loaded document for the correct pages etc
```

%% Output

77 Llama 2 : Open Foundation and Fine-Tuned Chat Models
Hugo Touvron∗Louis Martin†Kevin Stone†
Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra
Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen
Guillem Cucurull David Esiobu Jude Fernande
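The code of the quick-check cell above is not shown here; a minimal sketch of such a check (an assumption, not necessarily the original cell's code) could be:

``` python
# print how many pages were loaded and preview the first page's text
print(len(documents))
print(documents[0].page_content[:300])
```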
%% Cell type:markdown id:8c4ede5b tags:
Next we will store our documents.
There are more than 30 vector stores (DBs) supported by LangChain.
For this example we will use [Chroma](https://python.langchain.com/docs/integrations/vectorstores/chroma), which is light-weight and in-memory, so it's easy to get started with.
For other vector stores, especially if you need to store a large amount of data, see https://python.langchain.com/docs/integrations/vectorstores

We will also import the HuggingFaceEmbeddings and RecursiveCharacterTextSplitter to assist in storing the documents.
To store the documents, we will need to split them into chunks using [`RecursiveCharacterTextSplitter`](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/recursive_text_splitter) and create vector representations of these chunks using [`HuggingFaceEmbeddings`](https://www.google.com/search?q=langchain+hugging+face+embeddings&sca_esv=572890011&ei=ARUoZaH4LuumptQP48ah2Ac&oq=langchian+hugg&gs_lp=Egxnd3Mtd2l6LXNlcnAiDmxhbmdjaGlhbiBodWdnKgIIADIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCjIHEAAYgAQYCkjeHlC5Cli5D3ABeAGQAQCYAV6gAb4CqgEBNLgBAcgBAPgBAcICChAAGEcY1gQYsAPiAwQYACBBiAYBkAYI&sclient=gws-wiz-serp) before storing them into our vector database.
%% Cell type:code id:4f94f6f8 tags:

``` python
# there are more than 30 vector stores (DBs) supported by LangChain. Chroma is light-weight and in memory so it's easy to get started with

# other vector stores can be used to store large amounts of data - see https://python.langchain.com/docs/integrations/vectorstores
from langchain.vectorstores import Chroma

# embeddings are numerical representations of the question and answer text
from langchain.embeddings import HuggingFaceEmbeddings

# split the loaded documents into chunks; the chunk sizes here are illustrative defaults, not necessarily the notebook's exact values
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=20)
all_splits = text_splitter.split_documents(documents)

# create the vector db to store all the split chunks as embeddings
embeddings = HuggingFaceEmbeddings()
vectordb = Chroma.from_documents(
    documents=all_splits,
    embedding=embeddings,
)
```
%% Cell type:markdown id:bddc38e8 tags:
We then use `RetrievalQA` to retrieve the documents from the vector database and give the model more context on Llama 2, thereby increasing its knowledge.
%% Cell type:code id:1a2472c9 tags:

``` python
# use another LangChain chain, RetrievalQA, to associate Llama with the loaded documents stored in the vector db
from langchain.chains import RetrievalQA

qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=vectordb.as_retriever()
)

# for each question, LangChain performs a semantic similarity search of it in the vector db, then passes the search results as the context
# for Llama to answer questions about the data stored in the vector db
question = "What is llama2?"
result = qa_chain({"query": question})

# it takes close to 2 minutes to return the result (using vector stores other than Chroma, such as FAISS, can take even longer), because
# Llama2 is running on a local Mac. To get much faster results, you can use a cloud service with a GPU for inference - see HelloLlamaCloud
# for a demo.
```
%% Output

Llama.generate: prefix-match hit

Llama 2 is a new language model developed by Meta AI that has been released openly to encourage responsible AI innovation. It is a fine-tuned version of the original Llama model and is optimized for dialogue use cases. The model has not covered all scenarios and may produce inaccurate or objectionable responses, so developers should perform safety testing and tuning before deploying any applications of Llama 2.

llama_print_timings: load time = 1202.24 ms
llama_print_timings: sample time = 76.83 ms / 97 runs ( 0.79 ms per token, 1262.48 tokens per second)
llama_print_timings: prompt eval time = 97067.98 ms / 1146 tokens ( 84.70 ms per token, 11.81 tokens per second)
llama_print_timings: eval time = 10431.81 ms / 96 runs ( 108.66 ms per token, 9.20 tokens per second)
llama_print_timings: total time = 107897.31 ms
%% Cell type:markdown id:db71e5d7 tags:
For each question, LangChain performs a semantic similarity search of it in the vector db, then passes the search results as the context to the model to answer the question.
It takes close to 2 minutes to return the result (using vector stores other than Chroma, such as FAISS, can take even longer) because Llama2 is running on a local Mac.
To get much faster results, you can use a cloud service with a GPU for inference - see HelloLlamaCloud for a demo.
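As a hypothetical follow-up (not part of the original notebook), the same `qa_chain` can be reused for more questions about the paper, with the generated answer available under the `result` key:

``` python
# ask another question against the same vector db and print the answer
followup = qa_chain({"query": "How was Llama 2 fine-tuned?"})
print(followup["result"])
```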