%% Cell type:markdown id: tags:
<a href="https://colab.research.google.com/github/meta-llama/llama-recipes/blob/main/recipes/quickstart/Prompt_Engineering_with_Llama_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Prompt Engineering with Llama 3.1
Prompt engineering is using natural language to produce a desired response from a large language model (LLM).
This interactive guide covers prompt engineering & best practices with Llama 3.1.
%% Cell type:markdown id: tags:
## Introduction
%% Cell type:markdown id: tags:
### Why now?
[Vaswani et al. (2017)](https://arxiv.org/abs/1706.03762) introduced the world to transformer neural networks (originally for machine translation). Transformers ushered in an era of generative AI with diffusion models for image creation and large language models (`LLMs`) as **programmable deep learning networks**.
Programming foundational LLMs is done with natural language – it doesn't require training/tuning like ML models of the past. This has opened the door to a massive amount of innovation and a paradigm shift in how technology can be deployed. The science/art of using natural language to program language models to accomplish a task is referred to as **Prompt Engineering**.
%% Cell type:markdown id: tags:
### Llama Models
In 2023, Meta introduced the [Llama language models](https://ai.meta.com/llama/) (Llama Chat, Code Llama, Llama Guard). These are general purpose, state-of-the-art LLMs.
Llama models come in varying parameter sizes. The smaller models are cheaper to deploy and run; the larger models are more capable.
#### Llama 3.1
1. `llama-3.1-8b` - base pretrained 8 billion parameter model
1. `llama-3.1-70b` - base pretrained 70 billion parameter model
1. `llama-3.1-405b` - base pretrained 405 billion parameter model
1. `llama-3.1-8b-instruct` - instruction fine-tuned 8 billion parameter model
1. `llama-3.1-70b-instruct` - instruction fine-tuned 70 billion parameter model
1. `llama-3.1-405b-instruct` - instruction fine-tuned 405 billion parameter model (flagship)
#### Llama 3
1. `llama-3-8b` - base pretrained 8 billion parameter model
1. `llama-3-70b` - base pretrained 70 billion parameter model
1. `llama-3-8b-instruct` - instruction fine-tuned 8 billion parameter model
1. `llama-3-70b-instruct` - instruction fine-tuned 70 billion parameter model (flagship)
#### Llama 2
1. `llama-2-7b` - base pretrained 7 billion parameter model
1. `llama-2-13b` - base pretrained 13 billion parameter model
1. `llama-2-70b` - base pretrained 70 billion parameter model
1. `llama-2-7b-chat` - chat fine-tuned 7 billion parameter model
1. `llama-2-13b-chat` - chat fine-tuned 13 billion parameter model
1. `llama-2-70b-chat` - chat fine-tuned 70 billion parameter model (flagship)
%% Cell type:markdown id: tags:
Code Llama is a code-focused LLM built on top of Llama 2, also available in various sizes and fine-tunes:
%% Cell type:markdown id: tags:
#### Code Llama
1. `codellama-7b` - code fine-tuned 7 billion parameter model
1. `codellama-13b` - code fine-tuned 13 billion parameter model
1. `codellama-34b` - code fine-tuned 34 billion parameter model
1. `codellama-70b` - code fine-tuned 70 billion parameter model
1. `codellama-7b-instruct` - code & instruct fine-tuned 7 billion parameter model
1. `codellama-13b-instruct` - code & instruct fine-tuned 13 billion parameter model
1. `codellama-34b-instruct` - code & instruct fine-tuned 34 billion parameter model
1. `codellama-70b-instruct` - code & instruct fine-tuned 70 billion parameter model
1. `codellama-7b-python` - Python fine-tuned 7 billion parameter model
1. `codellama-13b-python` - Python fine-tuned 13 billion parameter model
1. `codellama-34b-python` - Python fine-tuned 34 billion parameter model
1. `codellama-70b-python` - Python fine-tuned 70 billion parameter model
%% Cell type:markdown id: tags:
## Getting an LLM
Large language models are deployed and accessed in a variety of ways, including:
1. **Self-hosting**: Using local hardware to run inference. Ex. running Llama on your MacBook Pro using [llama.cpp](https://github.com/ggerganov/llama.cpp) (see the sketch after this list).
    * Best for privacy/security or if you already have a GPU.
1. **Cloud hosting**: Using a cloud provider to deploy an instance that hosts a specific model. Ex. running Llama on cloud providers like AWS, Azure, GCP, and others.
    * Best for customizing models and their runtime (ex. fine-tuning a model for your use case).
1. **Hosted API**: Call LLMs directly via an API. There are many companies that provide Llama inference APIs, including AWS Bedrock, Replicate, Anyscale, Together, and others.
    * Easiest option overall.
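%% Cell type:markdown id: tags:
As a rough illustration of the self-hosting option, here is a minimal sketch using the community [`llama-cpp-python`](https://github.com/abetlen/llama-cpp-python) bindings for llama.cpp. The model path and generation parameters are assumptions for illustration only; point `model_path` at whatever GGUF file you have downloaded locally. This is optional and not used elsewhere in the guide.
%% Cell type:code id: tags:
``` python
# Minimal self-hosting sketch.
# Assumes: `pip install llama-cpp-python` and a locally downloaded GGUF model file.
from llama_cpp import Llama

# Hypothetical path to a quantized Llama model on disk
llm = Llama(model_path="./models/llama-3.1-8b-instruct.Q4_K_M.gguf")

# Simple completion-style call; the result follows an OpenAI-like response shape
output = llm(
    "Q: What is the typical color of the sky? A:",
    max_tokens=32,
    stop=["Q:"],       # stop generating when the model starts a new question
    temperature=0.6,
)
print(output["choices"][0]["text"])
```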
%% Cell type:markdown id: tags:
### Hosted APIs
Hosted APIs are the easiest way to get started. We'll use them here. There are usually two main endpoints, sketched below:
1. **`completion`**: generate a response to a given prompt (a string).
1. **`chat_completion`**: generate the next message in a list of messages, enabling more explicit instruction and context for use cases like chatbots.
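%% Cell type:markdown id: tags:
To make the distinction concrete, here is a sketch of what the two request shapes typically look like. These dictionaries are illustrative only – exact field names and schemas vary by provider, and the model name is just a placeholder.
%% Cell type:code id: tags:
``` python
# Illustrative request shapes only; each provider defines its own schema.
completion_request = {
    "model": "llama-3.1-70b-instruct",       # placeholder model name
    "prompt": "Write a haiku about llamas",  # a single string prompt
}

chat_completion_request = {
    "model": "llama-3.1-70b-instruct",
    "messages": [                            # a structured conversation history
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about llamas"},
    ],
}
```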
%% Cell type:markdown id: tags:
## Tokens
LLMs process inputs and outputs in chunks called *tokens*. Think of these, roughly, as words – each model will have its own tokenization scheme. For example, this sentence...
> Our destiny is written in the stars.
...is tokenized into `["Our", " destiny", " is", " written", " in", " the", " stars", "."]` for Llama 3. See [this](https://tiktokenizer.vercel.app/?model=meta-llama%2FMeta-Llama-3-8B) for an interactive tokenizer tool.
Tokens matter most when you consider API pricing and internal behavior (ex. hyperparameters).
Each model has a maximum context length that your prompt cannot exceed. That's 128K tokens for Llama 3.1, 4K for Llama 2, and 100K for Code Llama.
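%% Cell type:markdown id: tags:
If you want to inspect tokenization yourself, one option is the Hugging Face `transformers` tokenizer for Llama 3. This is a minimal sketch assuming you have `transformers` installed and access to the gated `meta-llama/Meta-Llama-3-8B` repository on Hugging Face; it is not required for the rest of this guide.
%% Cell type:code id: tags:
``` python
# Optional: inspect how Llama 3 tokenizes text.
# Requires `pip install transformers` and accepting the Meta Llama 3 license on Hugging Face.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

text = "Our destiny is written in the stars."
tokens = tokenizer.tokenize(text)   # list of token strings
token_ids = tokenizer.encode(text)  # integer IDs (includes special tokens)

print(tokens)
print(f"{len(token_ids)} token IDs: {token_ids}")
```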
%% Cell type:markdown id: tags:
## Notebook Setup
The following APIs will be used to call LLMs throughout the guide. As an example, we'll call Llama 3.1 chat using [Groq](https://console.groq.com/playground?model=llama3-70b-8192).
To install prerequisites run:
%% Cell type:code id: tags:
``` python
import sys
!{sys.executable} -m pip install groq
```
%% Cell type:code id: tags:
``` python
import os
from typing import Dict, List

from groq import Groq

# Get a free API key from https://console.groq.com/keys
os.environ["GROQ_API_KEY"] = "YOUR_GROQ_API_KEY"

LLAMA3_405B_INSTRUCT = "llama-3.1-405b-reasoning"  # Note: Groq currently makes the 405B model available only to paying customers
LLAMA3_70B_INSTRUCT = "llama-3.1-70b-versatile"
LLAMA3_8B_INSTRUCT = "llama-3.1-8b-instant"

DEFAULT_MODEL = LLAMA3_70B_INSTRUCT

client = Groq()

def assistant(content: str):
    return { "role": "assistant", "content": content }

def user(content: str):
    return { "role": "user", "content": content }

def chat_completion(
    messages: List[Dict],
    model = DEFAULT_MODEL,
    temperature: float = 0.6,
    top_p: float = 0.9,
) -> str:
    response = client.chat.completions.create(
        messages=messages,
        model=model,
        temperature=temperature,
        top_p=top_p,
    )
    return response.choices[0].message.content

def completion(
    prompt: str,
    model: str = DEFAULT_MODEL,
    temperature: float = 0.6,
    top_p: float = 0.9,
) -> str:
    return chat_completion(
        [user(prompt)],
        model=model,
        temperature=temperature,
        top_p=top_p,
    )

def complete_and_print(prompt: str, model: str = DEFAULT_MODEL):
    print(f'==============\n{prompt}\n==============')
    response = completion(prompt, model)
    print(response, end='\n\n')
```
%% Cell type:markdown id: tags:
### Completion APIs
Let's try Llama 3.1!
%% Cell type:code id: tags:
``` python
complete_and_print("The typical color of the sky is: ")
```
%% Cell type:code id: tags:
``` python
complete_and_print("which model version are you?")
```
%% Cell type:markdown id: tags:
### Chat Completion APIs
Chat completion models provide additional structure to interacting with an LLM. An array of structured message objects is sent to the LLM instead of a single piece of text. This message list provides the LLM with some "context" or "history" from which to continue.
Typically, each message contains `role` and `content`:
* Messages with the `system` role are used to provide core instruction to the LLM by developers (a helper for this role is sketched below).
* Messages with the `user` role are typically human-provided messages.
* Messages with the `assistant` role are typically generated by the LLM.
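%% Cell type:markdown id: tags:
The setup cell above only defines `user()` and `assistant()` helpers. As a small illustration of the `system` role, here is a sketch of an analogous `system()` helper used with the same `chat_completion` function. The helper name and the example instruction are assumptions, not part of the original setup.
%% Cell type:code id: tags:
``` python
# Hypothetical helper mirroring user()/assistant() from the setup cell
def system(content: str):
    return { "role": "system", "content": content }

# Example: a system message steering the tone of the reply
response = chat_completion(messages=[
    system("You are a terse assistant. Answer in one short sentence."),
    user("Why is the sky blue?"),
])
print(response)
```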
%% Cell type:code id: tags:
``` python
response = chat_completion(messages=[
    user("My favorite color is blue."),
    assistant("That's great to hear!"),
    user("What is my favorite color?"),
])
print(response)
# "Sure, I can help you with that! Your favorite color is blue."
```
%% Cell type:markdown id: tags:
### LLM Hyperparameters
#### `temperature` & `top_p`
These APIs also take parameters which influence the creativity and determinism of your output.
At each step, LLMs generate a list of most likely tokens and their respective probabilities. The least likely tokens are "cut" from the list (based on `top_p`), and then a token is randomly selected from the remaining candidates (`temperature`).
In other words: `top_p` controls the breadth of vocabulary in a generation and `temperature` controls the randomness within that vocabulary. A temperature of ~0 produces *almost* deterministic results.
[Read more about temperature setting here](https://community.openai.com/t/cheat-sheet-mastering-temperature-and-top-p-in-chatgpt-api-a-few-tips-and-tricks-on-controlling-the-creativity-deterministic-output-of-prompt-responses/172683).
Let's try it out:
%% Cell type:code id: tags:
``` python
def print_tuned_completion(temperature: float, top_p: float):
    response = completion("Write a haiku about llamas", temperature=temperature, top_p=top_p)
    print(f'[temperature: {temperature} | top_p: {top_p}]\n{response.strip()}\n')

print_tuned_completion(0.01, 0.01)
print_tuned_completion(0.01, 0.01)
# These two generations are highly likely to be the same

print_tuned_completion(1.0, 1.0)
print_tuned_completion(1.0, 1.0)
# These two generations are highly likely to be different
```
%% Cell type:markdown id: tags:
## Prompting Techniques
%% Cell type:markdown id: tags:
### Explicit Instructions
Detailed, explicit instructions produce better results than open-ended prompts:
%% Cell type:code id: tags:
``` python
complete_and_print(prompt="Describe quantum physics in one short sentence of no more than 12 words")
# Returns a succinct explanation of quantum physics that mentions particles and states existing simultaneously.
```
%% Cell type:markdown id: tags:
You can think of giving explicit instructions as placing rules and restrictions on how Llama 3 responds to your prompt.
- Stylization
    - `Explain this to me like a topic on a children's educational network show teaching elementary students.`
    - `I'm a software engineer using large language models for summarization. Summarize the following text in under 250 words:`
    - `Give your answer like an old timey private investigator hunting down a case step by step.`
- Formatting
    - `Use bullet points.`
    - `Return as a JSON object.`
    - `Use less technical terms and help me apply it in my work in communications.`
- Restrictions
    - `Only use academic papers.`
    - `Never give sources older than 2020.`
    - `If you don't know the answer, say that you don't know.`
Here's an example of using explicit instructions to get more specific results by limiting responses to recently created sources.
%% Cell type:code id: tags:
``` python
complete_and_print("Explain the latest advances in large language models to me.")
# More likely to cite sources from 2017

complete_and_print("Explain the latest advances in large language models to me. Always cite your sources. Never cite sources older than 2020.")
# Gives more specific advances and only cites sources from 2020
```
%% Cell type:markdown id: tags:
### Example Prompting using Zero- and Few-Shot Learning
A shot is an example or demonstration of what type of prompt and response you expect from a large language model. This term originates from training computer vision models on photographs, where one shot was one example or instance that the model used to classify an image ([Fei-Fei et al. (2006)](http://vision.stanford.edu/documents/Fei-FeiFergusPerona2006.pdf)).
#### Zero-Shot Prompting
Large language models like Llama 3 are unique because they are capable of following instructions and producing responses without having previously seen an example of a task. Prompting without examples is called "zero-shot prompting".
Let's try using Llama 3 as a sentiment detector. You may notice that output format varies - we can improve this with better prompting.
%% Cell type:code id: tags:
``` python
complete_and_print("Text: This was the best movie I've ever seen! \n The sentiment of the text is: ")
# Returns positive sentiment

complete_and_print("Text: The director was trying too hard. \n The sentiment of the text is: ")
# Returns negative sentiment
```
%% Cell type:markdown id: tags:
#### Few-Shot Prompting
Adding specific examples of your desired output generally results in more accurate, consistent output. This technique is called "few-shot prompting".
In this example, the generated response follows our desired format: a more nuanced sentiment classifier that gives a confidence percentage for positive, neutral, and negative.
See also: [Zhao et al. (2021)](https://arxiv.org/abs/2102.09690), [Liu et al. (2021)](https://arxiv.org/abs/2101.06804), [Su et al. (2022)](https://arxiv.org/abs/2209.01975), [Rubin et al. (2022)](https://arxiv.org/abs/2112.08633).
%% Cell type:code id: tags:
``` python
def sentiment(text):
    response = chat_completion(messages=[
        user("You are a sentiment classifier. For each message, give the percentage of positive/neutral/negative."),
        user("I liked it"),
        assistant("70% positive 30% neutral 0% negative"),
        user("It could be better"),
        assistant("0% positive 50% neutral 50% negative"),
        user("It's fine"),
        assistant("25% positive 50% neutral 25% negative"),
        user(text),
    ])
    return response

def print_sentiment(text):
    print(f'INPUT: {text}')
    print(sentiment(text))

print_sentiment("I thought it was okay")
# More likely to return a balanced mix of positive, neutral, and negative
print_sentiment("I loved it!")
# More likely to return 100% positive
print_sentiment("Terrible service 0/10")
# More likely to return 100% negative
```
%% Cell type:markdown id: tags:
### Role Prompting
Llama will often give more consistent responses when given a role ([Kong et al. (2023)](https://browse.arxiv.org/pdf/2308.07702.pdf)). Roles give context to the LLM on what type of answers are desired.
Let's use Llama 3 to create a more focused, technical response for a question around the pros and cons of using PyTorch.
%% Cell type:code id: tags:
``` python
complete_and_print("Explain the pros and cons of using PyTorch.")
# More likely to cover general areas like documentation, the PyTorch community, and a steep learning curve

complete_and_print("Your role is a machine learning expert who gives highly technical advice to senior engineers who work with complicated datasets. Explain the pros and cons of using PyTorch.")
# Often results in more technical benefits and drawbacks, with details on topics like how model layers are handled
```
%% Cell type:markdown id: tags:
### Chain-of-Thought
Simply adding a phrase encouraging step-by-step thinking "significantly improves the ability of large language models to perform complex reasoning" ([Wei et al. (2022)](https://arxiv.org/abs/2201.11903)). This technique is called "CoT" or "Chain-of-Thought" prompting.
Llama 3.1 now reasons step-by-step naturally without the addition of the phrase. This section remains for completeness.
%% Cell type:code id: tags:
``` python
prompt = "Who lived longer, Mozart or Elvis?"

complete_and_print(prompt)
# Llama 2 would often give the incorrect answer of "Mozart"

complete_and_print(f"{prompt} Let's think through this carefully, step by step.")
# Gives the correct answer "Elvis"
```
%% Cell type:markdown id: tags:
### Self-Consistency
LLMs are probabilistic, so even with Chain-of-Thought, a single generation might produce incorrect results. Self-Consistency ([Wang et al. (2022)](https://arxiv.org/abs/2203.11171)) improves accuracy by selecting the most frequent answer from multiple generations (at the cost of higher compute):
%% Cell type:code id: tags:
``` python
import re
from statistics import mode

def gen_answer():
    response = completion(
        "John found that the average of 15 numbers is 40. "
        "If 10 is added to each number then the mean of the numbers is? "
        "Report the answer surrounded by backticks (example: `123`)",
    )
    match = re.search(r'`(\d+)`', response)
    if match is None:
        return None
    return match.group(1)

answers = [gen_answer() for i in range(5)]

print(
    f"Answers: {answers}\n",
    f"Final answer: {mode(answers)}",
)

# Sample runs of Llama-3-70B (all correct):
# ['60', '50', '50', '50', '50'] -> 50
# ['50', '50', '50', '60', '50'] -> 50
# ['50', '50', '60', '50', '50'] -> 50
```
%% Cell type:markdown id: tags:
### Retrieval-Augmented Generation
You'll probably want to use factual knowledge in your application. You can extract common facts from today's large models out-of-the-box (i.e. using just the model weights):
%% Cell type:code id: tags:
``` python
complete_and_print("What is the capital of California?")
# Gives the correct answer "Sacramento"
```
%% Cell type:markdown id: tags:
However, more specific facts, or private information, cannot be reliably retrieved. The model will either declare it does not know or hallucinate an incorrect answer:
%% Cell type:code id: tags:
``` python
complete_and_print("What was the temperature in Menlo Park on December 12th, 2023?")
# "I'm just an AI, I don't have access to real-time weather data or historical weather records."

complete_and_print("What time is my dinner reservation on Saturday and what should I wear?")
# "I'm not able to access your personal information [..] I can provide some general guidance"
```
%% Cell type:markdown id: tags:
Retrieval-Augmented Generation, or RAG, describes the practice of including information in the prompt that you've retrieved from an external database ([Lewis et al. (2020)](https://arxiv.org/abs/2005.11401v4)). It's an effective way to incorporate facts into your LLM application and is more affordable than fine-tuning, which may be costly and negatively impact the foundational model's capabilities.
This could be as simple as a lookup table or as sophisticated as a vector database like [FAISS](https://github.com/facebookresearch/faiss) containing all of your company's knowledge:
%% Cell type:code id: tags:
``` python
MENLO_PARK_TEMPS = {
    "2023-12-11": "52 degrees Fahrenheit",
    "2023-12-12": "51 degrees Fahrenheit",
    "2023-12-13": "51 degrees Fahrenheit",
}

def prompt_with_rag(retrieved_info, question):
    complete_and_print(
        f"Given the following information: '{retrieved_info}', respond to: '{question}'"
    )

def ask_for_temperature(day):
    temp_on_day = MENLO_PARK_TEMPS.get(day) or "unknown temperature"
    prompt_with_rag(
        f"The temperature in Menlo Park was {temp_on_day} on {day}",  # Retrieved fact
        f"What is the temperature in Menlo Park on {day}?",  # User question
    )

ask_for_temperature("2023-12-12")
# "Sure! The temperature in Menlo Park on 2023-12-12 was 51 degrees Fahrenheit."

ask_for_temperature("2023-07-18")
# "I'm not able to provide the temperature in Menlo Park on 2023-07-18 as the information provided states that the temperature was unknown."
```
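%% Cell type:markdown id: tags:
For the "vector database" end of that spectrum, here is a minimal sketch of semantic retrieval feeding the same `prompt_with_rag` helper defined above. It assumes the `faiss-cpu` and `sentence-transformers` packages, and the small document list and the `all-MiniLM-L6-v2` embedding model are hypothetical choices for illustration; none of this is part of the original notebook.
%% Cell type:code id: tags:
``` python
# Optional sketch: retrieve the most relevant fact with embeddings + FAISS,
# then pass it to prompt_with_rag() defined above.
# Requires: pip install faiss-cpu sentence-transformers
import faiss
from sentence_transformers import SentenceTransformer

# Hypothetical mini knowledge base
documents = [
    "The temperature in Menlo Park was 51 degrees Fahrenheit on 2023-12-12.",
    "Menlo Park is located in San Mateo County, California.",
    "The Llama 3.1 models support a 128K-token context length.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_embeddings = embedder.encode(documents).astype("float32")

# Build a flat L2 index over the document embeddings
index = faiss.IndexFlatL2(doc_embeddings.shape[1])
index.add(doc_embeddings)

def retrieve(question: str, k: int = 1) -> str:
    query_embedding = embedder.encode([question]).astype("float32")
    _, indices = index.search(query_embedding, k)
    return documents[indices[0][0]]

question = "What was the temperature in Menlo Park on 2023-12-12?"
prompt_with_rag(retrieve(question), question)
```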
%% Cell type:markdown id: tags:
### Program-Aided Language Models
LLMs, by nature, aren't great at performing calculations. Let's try:
$$
((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))
$$
(The correct answer is 91383.)
%% Cell type:code id: tags:
``` python
complete_and_print("""
Calculate the answer to the following math problem:
((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))
""")
# Gives incorrect answers like 92448, 92648, 95463
```
%% Cell type:markdown id: tags:
[Gao et al. (2022)](https://arxiv.org/abs/2211.10435) introduced the concept of "Program-aided Language Models" (PAL). While LLMs are bad at arithmetic, they're great for code generation. PAL leverages this fact by instructing the LLM to write code to solve calculation tasks.
%% Cell type:code id: tags:
``` python
complete_and_print(
    """
    # Python code to calculate: ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5))
    """,
)
```
%% Cell type:code id: tags:
``` python
# The following code was generated by Llama 3 70B:

result = ((-5 + 93 * 4 - 0) * (4**4 - 7 + 0 * 5))
print(result)
```
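%% Cell type:markdown id: tags:
To close the PAL loop programmatically, you can extract the generated code and run it rather than pasting it by hand. The sketch below is a simplified illustration; the prompt wording, the regular expression, and the use of `exec` are assumptions for demo purposes only, since executing model-generated code without sandboxing is unsafe in real applications.
%% Cell type:code id: tags:
``` python
import re

# Ask the model for code only, fenced in a Python code block
generated = completion(
    "Write Python code that computes ((-5 + 93 * 4 - 0) * (4^4 + -7 + 0 * 5)) "
    "and prints the result. Reply with only a fenced Python code block."
)

# Pull out the fenced code block if present, otherwise fall back to the raw reply
match = re.search(r"```(?:python)?\n(.*?)```", generated, re.DOTALL)
code = match.group(1) if match else generated

# WARNING: exec() on untrusted model output is only acceptable in a throwaway demo
exec(code)
```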
%% Cell type:markdown id: tags:
### Limiting Extraneous Tokens
A common struggle with Llama 2 is getting output without extraneous tokens (ex. "Sure! Here's more information on..."), even when Llama 2 is explicitly instructed to be concise and skip the preamble. Llama 3.x can better follow instructions.
Check out this improvement that combines a role, rules and restrictions, explicit instructions, and an example:
%% Cell type:code id: tags:
``` python
complete_and_print(
    "Give me the zip code for Menlo Park in JSON format with the field 'zip_code'",
)
# Likely returns the JSON and also "Sure! Here's the JSON..."

complete_and_print(
    """
    You are a robot that only outputs JSON.
    You reply in JSON format with the field 'zip_code'.
    Example question: What is the zip code of the Empire State Building? Example answer: {'zip_code': 10118}
    Now here is my question: What is the zip code of Menlo Park?
    """,
)
# "{'zip_code': 94025}"
```
%% Cell type:markdown id: tags:
## Additional References
- [PromptingGuide.ai](https://www.promptingguide.ai/)
- [LearnPrompting.org](https://learnprompting.org/)
- [Lil'Log Prompt Engineering Guide](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/)
%% Cell type:markdown id: tags:
## Author & Contact
Edited by [Dalton Flanagan](https://www.linkedin.com/in/daltonflanagan/) (dalton@meta.com) with contributions from Mohsen Agsen, Bryce Bortree, Ricardo Juan Palma Duran, Kaolin Fire, Thomas Scialom.