From 637583bf88857fcd6c8e105d50574faa7305e7ad Mon Sep 17 00:00:00 2001
From: JackKuo666 <41313632+JackKuo666@users.noreply.github.com>
Date: Wed, 29 Nov 2023 17:47:14 +0800
Subject: [PATCH] Update README to support ChatGLM3

Update README to support ChatGLM3
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index d9c9a22..3e5b479 100644
--- a/README.md
+++ b/README.md
@@ -124,7 +124,7 @@ All data in **LongBench** (LongBench-E) are standardized to the following format
 #### Evaluation
 Install the requirements with pip: `pip install -r requirements.txt`. For Llama-2 based models, we recommend using Flash Attention to speed up inference and save GPU memory. The relevant dependencies can be installed according to the code base of [Flash Attention](https://github.com/Dao-AILab/flash-attention).
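For reference, a minimal sketch of that install step (the `flash-attn` package name and the `--no-build-isolation` flag follow the Flash Attention README; a CUDA toolchain compatible with your PyTorch build is assumed):

```bash
# Hedged sketch: install Flash Attention from PyPI.
# Assumes CUDA and a matching PyTorch are already set up; see the
# Flash Attention repository for authoritative, version-specific steps.
pip install flash-attn --no-build-isolation
```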
 
-First, run [pred.py](pred.py) and select the model you want to evaluate via `--model`. Let's take ChatGLM2-6B-32k as an example (HuggingFace model weight will be downloaded automatically according to the path in [model2path.json](config/model2path.json), you can change the path in this file to load the model weight from local):
+First, run [pred.py](pred.py) and select the model you want to evaluate via `--model`. Let's take ChatGLM3-6B-32k as an example (the HuggingFace model weights will be downloaded automatically according to the path in [model2path.json](config/model2path.json); you can change the path in this file to load the model weights from a local directory):
 ```bash
 CUDA_VISIBLE_DEVICES=0 python pred.py --model chatglm3-6b-32k
 ```
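To load weights from a local directory rather than the Hugging Face hub, point the model's entry in [model2path.json](config/model2path.json) at a local path. A hypothetical sketch (the key must match the `--model` name; the path shown is a placeholder):

```json
{
  "chatglm3-6b-32k": "/path/to/local/chatglm3-6b-32k"
}
```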
-- 
GitLab