From a695fd7f81abb528d8c50f351fa9689637322931 Mon Sep 17 00:00:00 2001
From: Kevin Slagle <kjslag@gmail.com>
Date: Fri, 17 May 2024 13:03:39 +0200
Subject: [PATCH] fix typos
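
`python -m` takes a module name rather than a file path, so
`python -m finetuning.py` fails to resolve; invoke the script as
`python finetuning.py` instead (in both the LoRA example and the
grammar_dataset example). Also fix the misspellings aplaca.json ->
alpaca.json and grammer_dataset -> grammar_dataset.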

---
 recipes/finetuning/singlegpu_finetuning.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/recipes/finetuning/singlegpu_finetuning.md b/recipes/finetuning/singlegpu_finetuning.md
index 81bf5876..cdfc0393 100644
--- a/recipes/finetuning/singlegpu_finetuning.md
+++ b/recipes/finetuning/singlegpu_finetuning.md
@@ -16,7 +16,7 @@ To run fine-tuning on a single GPU, we will make use of two packages:
 ## How to run it?
 
 ```bash
-python -m finetuning.py  --use_peft --peft_method lora --quantization --use_fp16 --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
+python finetuning.py  --use_peft --peft_method lora --quantization --use_fp16 --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
 ```
 The args used in the command above are:
 
@@ -34,7 +34,7 @@ Currently 3 open source datasets are supported that can be found in [Datasets co
 
 * `grammar_dataset` : use this [notebook](../../src/llama_recipes/datasets/grammar_dataset/grammar_dataset_process.ipynb) to pull and process the Jfleg and C4 200M datasets for grammar checking.
 
-* `alpaca_dataset` : to get this open source data please download the `aplaca.json` to `dataset` folder.
+* `alpaca_dataset` : to get this open source data please download the `alpaca.json` file to the `dataset` folder.
 
 
 ```bash
@@ -46,7 +46,7 @@ wget -P ../../src/llama_recipes/datasets https://raw.githubusercontent.com/tatsu
 to run with each of the datasets set the `dataset` flag in the command as shown below:
 
 ```bash
-# grammer_dataset
+# grammar_dataset
 
-python -m finetuning.py  --use_peft --peft_method lora --quantization  --dataset grammar_dataset --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
+python finetuning.py  --use_peft --peft_method lora --quantization  --dataset grammar_dataset --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
 
-- 
GitLab