From 79028b14b3d4c094f0e865da01e39441e78928d7 Mon Sep 17 00:00:00 2001
From: Hamid Shojanazeri <hamid.nazeri2010@gmail.com>
Date: Tue, 18 Jul 2023 23:48:20 +0000
Subject: [PATCH] update notes for int8 lack of support in FSDP

---
 README.md         | 2 +-
 docs/mutli_gpu.md | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ff6adee3..90800bcd 100644
--- a/README.md
+++ b/README.md
@@ -81,7 +81,7 @@ Here we make use of Parameter Efficient Methods (PEFT) as described in the next
 
 ### Multiple GPUs One Node:
 
-**NOTE** please make sure to use PyTorch Nightlies for using PEFT+FSDP .
+**NOTE** Please make sure to use PyTorch Nightlies when using PEFT+FSDP. Also note that int8 quantization from bitsandbytes is currently not supported in FSDP.
 
 ```bash
 
diff --git a/docs/mutli_gpu.md b/docs/mutli_gpu.md
index b0ca2e9f..5695ccf5 100644
--- a/docs/mutli_gpu.md
+++ b/docs/mutli_gpu.md
@@ -26,6 +26,8 @@ This runs with the `samsum_dataset` for summarization application by default.
 
 **Multiple GPUs one node**:
 
+**NOTE** Please make sure to use PyTorch Nightlies when using PEFT+FSDP. Also note that int8 quantization from bitsandbytes is currently not supported in FSDP.
+
 ```bash
 
 torchrun --nnodes 1 --nproc_per_node 4  ../llama_finetuning.py --enable_fsdp --model_name /patht_of_model_folder/7B --use_peft --peft_method lora --output_dir Path/to/save/PEFT/model
-- 
GitLab