From db05e13f0ab41025bc739eb3439cc765a74d41b4 Mon Sep 17 00:00:00 2001
From: Jack Tang <73820234+McJackTang@users.noreply.github.com>
Date: Tue, 15 Aug 2023 16:09:23 +0800
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 4a7757e..dad6aaf 100644
--- a/README.md
+++ b/README.md
@@ -23,13 +23,13 @@ LongBench includes 13 English tasks, 5 Chinese tasks, and 2 code tasks, with the
 | Code Completion | - | - | 2 |
 
 ## 🔍 Table of Contents
-- [Leaderboard](#%F0%9F%96%A5%EF%B8%8F%20Leaderboard)
+- [Leaderboard](#%F0%9F%96%A5%EF%B8%8FLeaderboard)
 - [How to evaluate on LongBench](#how-to-evaluate-on-LongBench)
 - [Evaluation Result on Each Dataset](#evaluation-result-on-each-dataset)
 - [Acknowledgement](#acknowledgement)
 - [Citation](#citation)
 
-## 🖥️ Leaderboard
+## 🖥️Leaderboard
 Here are the average scores (%) on the main task categories, in both Chinese and English, under the zero-shot scenario. Please refer to this [link](task.md) for the evaluation metrics used for each task.
 
 > Note: For text that exceeds the model's processing length, we truncate from the middle of the text, preserving information from the beginning and end, in line with the observations from [Lost in the Middle](https://arxiv.org/abs/2307.03172). Experiments show that this truncation method has the least impact on model performance.
-- 
GitLab
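
The truncate-from-the-middle strategy mentioned in the note above can be sketched in a few lines. The snippet below is a minimal illustration only, assuming a Hugging Face tokenizer and a hypothetical `max_length` token budget; names such as `truncate_middle` are illustrative and not taken from the repository's actual code.

```python
# Minimal sketch of middle truncation: if a prompt exceeds the token budget,
# keep the first and last halves of the budget and drop the middle.
from transformers import AutoTokenizer


def truncate_middle(prompt: str, tokenizer, max_length: int) -> str:
    """Keep the beginning and end of `prompt`, dropping tokens from the middle."""
    tokens = tokenizer(prompt, truncation=False, return_tensors="pt").input_ids[0]
    if len(tokens) <= max_length:
        return prompt
    half = max_length // 2
    # Decode the first `half` and last `half` tokens and join them.
    return (tokenizer.decode(tokens[:half], skip_special_tokens=True)
            + tokenizer.decode(tokens[-half:], skip_special_tokens=True))


# Example usage (model name and budget are illustrative):
# tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
# prompt = truncate_middle(long_prompt, tokenizer, max_length=31500)
```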