Unverified commit 16a00e05, authored by Ravi Theja, committed by GitHub

Update colab notebook (#12247)

parent e825f669
%% Cell type:markdown id: tags:
# RAFT Dataset LlamaPack
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-raft-dataset/examples/raft_dataset.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This LlamaPack implements RAFT: Adapting Language Model to Domain Specific RAG ([paper](https://arxiv.org/abs/2403.10131)).
Retrieval Augmented FineTuning (RAFT) is a training recipe introduced in the paper that aims to improve the performance of large language models (LLMs) on open-book, in-domain question-answering tasks. Given a question and a set of retrieved documents, RAFT trains the LLM to identify and cite verbatim the most relevant sequences from the documents that help answer the question, while ignoring irrelevant or distracting information. By explicitly training the model to distinguish between relevant and irrelevant information and to provide evidence from the relevant documents, RAFT encourages the LLM to develop better reasoning and explanation abilities, ultimately improving its ability to answer questions accurately and rationally in scenarios where additional context or knowledge is available.
A key component of RAFT is how the fine-tuning dataset is generated. Each QA pair also includes an "oracle" document from which the answer to the question can be deduced, as well as "distractor" documents which are irrelevant. During training, this forces the model to learn which information is relevant or irrelevant, and also to memorize domain knowledge.
In this notebook, we create a RAFT dataset using the `RAFTDatasetPack` LlamaPack.
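%% Cell type:markdown id: tags:
As an illustration of the dataset layout described above (a hedged sketch only; the exact field names produced by `RAFTDatasetPack` may differ from these illustrative ones):
%% Cell type:code id: tags:
``` python
# A sketch of one RAFT-style training example: a question, one "oracle"
# context that answers it, several "distractor" contexts, and a
# chain-of-thought answer that quotes the oracle verbatim.
# Field names here are illustrative, not the pack's exact schema.
example = {
    "question": "What did the author work on before starting Y Combinator?",
    "oracle_context": "Before YC, I worked on spam filters and painted.",
    "distractor_contexts": [
        "Lisp was the language I used for most of my early projects.",
        "We moved the office to a new building the following year.",
    ],
    "cot_answer": (
        "The context states: ##begin_quote## Before YC, I worked on spam "
        "filters and painted. ##end_quote## So the answer is: spam filters "
        "and painting."
    ),
}
print(list(example.keys()))
```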
%% Cell type:markdown id: tags:
#### Installation
%% Cell type:code id: tags:
``` python
!pip install llama-index
!pip install llama-index-packs-raft-dataset
```
%% Cell type:code id: tags:
``` python
import os

os.environ["OPENAI_API_KEY"] = "<YOUR OPENAI API KEY>"
```
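%% Cell type:markdown id: tags:
Hard-coded secrets are easy to leak from notebooks. A small stdlib-only sketch that prefers a key already set in the environment and only falls back to the placeholder:
%% Cell type:code id: tags:
``` python
import os

# Use an existing OPENAI_API_KEY from the environment if one is set;
# otherwise fall back to the placeholder so the notebook still runs.
api_key = os.environ.get("OPENAI_API_KEY", "<YOUR OPENAI API KEY>")
os.environ["OPENAI_API_KEY"] = api_key
```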
%% Cell type:markdown id: tags:
#### Download Data
%% Cell type:code id: tags:
``` python
!wget --user-agent "Mozilla" "https://raw.githubusercontent.com/run-llama/llama_index/main/docs/docs/examples/data/paul_graham/paul_graham_essay.txt" -O './paul_graham_essay.txt'
```
%% Cell type:code id: tags:
``` python
from llama_index.packs.raft_dataset import RAFTDatasetPack
```
%% Cell type:code id: tags:
``` python
raft_dataset = RAFTDatasetPack("./paul_graham_essay.txt")
```
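%% Cell type:markdown id: tags:
Under the hood, RAFT pairs each question's oracle chunk with distractor chunks drawn from the rest of the corpus. A minimal stdlib sketch of that sampling step (the pack's internal logic and parameter names may differ):
%% Cell type:code id: tags:
``` python
import random


def sample_distractors(chunks, oracle_idx, num_distract=3, seed=0):
    """Pick `num_distract` chunks at random, excluding the oracle chunk."""
    pool = [c for i, c in enumerate(chunks) if i != oracle_idx]
    rng = random.Random(seed)  # fixed seed for a reproducible sketch
    return rng.sample(pool, k=min(num_distract, len(pool)))


chunks = [f"chunk {i}" for i in range(6)]
distractors = sample_distractors(chunks, oracle_idx=2)
print(distractors)  # three chunks, none of them "chunk 2"
```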
%% Cell type:code id: tags:
``` python
# Beware of the costs involved: this uses GPT-4 for dataset creation.
# It can also take a long time, depending on the file size.
dataset = raft_dataset.run()
```
%% Cell type:markdown id: tags:
The above dataset is in HuggingFace `Dataset` format. You can save it in `.arrow` or `.jsonl` format and use it for further fine-tuning.
%% Cell type:code id: tags:
``` python
output_path = "<output path>"

# Save in .arrow format
dataset.save_to_disk(output_path)

# Save in .jsonl format
dataset.to_json(output_path + ".jsonl")
```
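%% Cell type:markdown id: tags:
`Dataset.to_json` writes JSON Lines: one JSON object per line. A stdlib-only sketch of writing and reading that format with toy records, independent of the `datasets` library:
%% Cell type:code id: tags:
``` python
import json
import os
import tempfile

# Toy records standing in for the generated RAFT examples.
records = [
    {"question": "q1", "cot_answer": "a1"},
    {"question": "q2", "cot_answer": "a2"},
]

path = os.path.join(tempfile.mkdtemp(), "raft_dataset.jsonl")

# Write: one JSON object per line.
with open(path, "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read back, e.g. when loading the file for fine-tuning.
with open(path) as f:
    loaded = [json.loads(line) for line in f]

print(loaded == records)  # True
```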
%% Cell type:markdown id: tags:
#### You can refer to the original implementation [here](https://github.com/ShishirPatil/gorilla/tree/main/raft)
...
...@@ -29,7 +29,7 @@ license = "MIT"
maintainers = ["ravi-theja"]
name = "llama-index-packs-raft-dataset"
readme = "README.md"
version = "0.1.2"
[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
...