LongBench-Chat is a benchmark for evaluating the instruction-following capabilities of large language models on queries 10k-100k tokens in length. It was introduced in the LongAlign paper to test how well models follow instructions over very long contexts.
Score (1-10) is the reported evaluation metric for LongBench-Chat. Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.

Higher is better.
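As a rough illustration of how a 1-10 score like this is typically produced, the sketch below averages per-question judge ratings (each on the 1-10 scale) into a single benchmark score. The function name and example ratings are illustrative, not LongBench-Chat's actual evaluation code.

```python
# Hypothetical sketch: average per-question 1-10 judge ratings
# into one overall benchmark score. Data is illustrative.

def aggregate_scores(ratings):
    """Average per-question ratings on a 1-10 scale into one score."""
    if not ratings:
        raise ValueError("no ratings to aggregate")
    for r in ratings:
        if not 1 <= r <= 10:
            raise ValueError(f"rating {r} is outside the 1-10 scale")
    return sum(ratings) / len(ratings)

# Example: five judged responses
print(round(aggregate_scores([9, 8, 10, 7, 9]), 2))  # → 8.6
```

In practice the per-question ratings on this benchmark come from an LLM judge rather than human annotators, so reported scores can vary slightly with the judge model used.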
| Rank | Model | Trust | Score | Year | Source |
|---|---|---|---|---|---|
| 1 | Qwen2.5-72B-Instruct | paper | 8.72 | N/A | Source ↗ |