Commit 79028b14 authored by Hamid Shojanazeri's avatar Hamid Shojanazeri

update notes for int8 lack of support in FSDP

parent aea8aef0
@@ -81,7 +81,7 @@ Here we make use of Parameter Efficient Methods (PEFT) as described in the next
 ### Multiple GPUs One Node:
-**NOTE** please make sure to use PyTorch Nightlies for using PEFT+FSDP .
+**NOTE** please make sure to use PyTorch Nightlies for using PEFT+FSDP. Also, note that int8 quantization from bitsandbytes is currently not supported in FSDP.
```bash
......
@@ -26,6 +26,8 @@ This runs with the `samsum_dataset` for summarization application by default.
 **Multiple GPUs one node**:
+**NOTE** please make sure to use PyTorch Nightlies for using PEFT+FSDP. Also, note that int8 quantization from bitsandbytes is currently not supported in FSDP.
 ```bash
torchrun --nnodes 1 --nproc_per_node 4 ../llama_finetuning.py --enable_fsdp --model_name /path_of_model_folder/7B --use_peft --peft_method lora --output_dir Path/to/save/PEFT/model
......
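The constraint added in this commit can also be enforced programmatically at startup, rather than relying on readers noticing the README note. A minimal sketch follows; the helper name and its arguments are illustrative, not part of the repo's actual config handling:

```python
# Hypothetical startup guard (not from the repo): fail fast when int8
# quantization from bitsandbytes is requested together with FSDP, which
# the note above says is currently unsupported.
def validate_train_config(enable_fsdp: bool, quantization: bool) -> None:
    """Raise if the unsupported FSDP + int8 combination is requested."""
    if enable_fsdp and quantization:
        raise ValueError(
            "int8 quantization (bitsandbytes) is currently not supported "
            "with FSDP; disable one of the two options."
        )

# Fine: FSDP with PEFT but without int8 quantization.
validate_train_config(enable_fsdp=True, quantization=False)
```

Calling such a guard before model loading surfaces the incompatibility as a clear error instead of a confusing failure deep inside FSDP wrapping.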