Commit 9c95ed4b authored by Hamid Shojanazeri's avatar Hamid Shojanazeri

clean up

parent 311a5c1e
@@ -50,7 +50,7 @@ Then convert your FSDP checkpoint to HuggingFace checkpoints using:
# --HF_model_path_or_name specifies the HF Llama model name or path where it has config.json and tokenizer.json
```
By default, training parameters are saved in `train_params.yaml` in the path where the FSDP checkpoints are saved. The converter script first tries to find the HuggingFace model name used during fine-tuning and loads the model configs from there; if it is not found, the user needs to provide it.
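The lookup-with-fallback behavior described above can be sketched as follows. This is a minimal sketch, not the repository's actual converter: the function name, the `model_name` key in `train_params.yaml`, and the naive line-based parsing are assumptions (a real implementation would parse the file with PyYAML).

```python
import os


def resolve_hf_model_name(fsdp_checkpoint_path: str, hf_model_path_or_name: str = "") -> str:
    """Return the HF model name from train_params.yaml next to the FSDP
    checkpoints, falling back to the user-supplied value if it is missing."""
    params_path = os.path.join(fsdp_checkpoint_path, "train_params.yaml")
    if os.path.exists(params_path):
        with open(params_path) as f:
            for line in f:
                key, sep, value = line.partition(":")
                # 'model_name' is an assumed key name in train_params.yaml
                if sep and key.strip() == "model_name":
                    return value.strip().strip("'\"")
    if not hf_model_path_or_name:
        raise ValueError(
            "train_params.yaml not found or has no model_name; "
            "please pass --HF_model_path_or_name explicitly."
        )
    return hf_model_path_or_name
```

If neither source yields a model name, the function raises instead of guessing, mirroring the behavior the README describes (the user must provide the name).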
Then run inference using: