Unverified commit 637583bf authored by JackKuo666, committed by GitHub
Update support chatglm3

parent 74d39f0b
@@ -124,7 +124,7 @@ All data in **LongBench** (LongBench-E) are standardized to the following format
#### Evaluation
Install the requirements with pip: `pip install -r requirements.txt`. For Llama-2 based models, we recommend using Flash Attention to optimize inference and save GPU memory. The relevant dependencies can be installed according to the code base of [Flash Attention](https://github.com/Dao-AILab/flash-attention).
- First, run [pred.py](pred.py) and select the model you want to evaluate via `--model`. Let's take ChatGLM2-6B-32k as an example (HuggingFace model weight will be downloaded automatically according to the path in [model2path.json](config/model2path.json), you can change the path in this file to load the model weight from local):
+ First, run [pred.py](pred.py) and select the model you want to evaluate via `--model`. Let's take ChatGLM3-6B-32k as an example (the HuggingFace model weights will be downloaded automatically according to the path in [model2path.json](config/model2path.json); you can change the path in this file to load the model weights from a local directory):
```bash
CUDA_VISIBLE_DEVICES=0 python pred.py --model chatglm3-6b-32k
```
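To illustrate how the `--model` name is turned into a weight path, here is a minimal sketch of the lookup the text describes. The dictionary below only mirrors the *shape* of [model2path.json](config/model2path.json); the exact entries and the helper name `resolve_model_path` are assumptions for illustration, not the actual contents of pred.py:

```python
import json

# Hypothetical contents mirroring config/model2path.json: each key is the
# name passed via --model, each value a HuggingFace repo id or a local path.
model2path = json.loads("""
{
    "chatglm3-6b-32k": "THUDM/chatglm3-6b-32k"
}
""")

def resolve_model_path(name: str) -> str:
    """Look up the weight path for a --model name (illustrative helper)."""
    if name not in model2path:
        raise KeyError(f"unknown model name: {name}")
    # Replacing the value with e.g. "/data/models/chatglm3-6b-32k" would make
    # the loader read local weights instead of downloading from HuggingFace.
    return model2path[name]

print(resolve_model_path("chatglm3-6b-32k"))
```

Editing the JSON value to point at a local directory is all that is needed to skip the automatic download.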