Codesota · Benchmark · LongBench-Chat

LongBench-Chat

LongBench-Chat is a benchmark for evaluating the instruction-following capability of large language models on queries 10k-100k tokens in length. It was introduced in the LongAlign paper to test how well models follow instructions over very long contexts.

§ 01 · SOTA history

Year over year.

Not enough data to show trend.
§ 02 · Leaderboard

Results by metric.

Only 1 model is listed on this benchmark.
Help build the community leaderboard by submitting your model results.

Score (1-10)

Score (1-10) is the reported evaluation metric for LongBench-Chat. Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.

Higher is better
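As a rough illustration of how a single 1-10 leaderboard number arises: LongBench-Chat scores each model response with an LLM judge on a 1-10 scale, and the reported score is an average over the test queries. The sketch below is a hypothetical aggregation, not the benchmark's actual evaluation code; the function name and data are illustrative.

```python
# Hypothetical sketch: averaging per-query judge ratings (each 1-10)
# into a single benchmark score like the one shown on the leaderboard.

def aggregate_score(judge_ratings):
    """Average per-query judge ratings, each expected in [1, 10]."""
    if not judge_ratings:
        raise ValueError("no ratings to aggregate")
    for r in judge_ratings:
        if not 1 <= r <= 10:
            raise ValueError(f"rating {r} outside the 1-10 scale")
    return sum(judge_ratings) / len(judge_ratings)

ratings = [9, 8, 10, 8, 9]  # example judge ratings for five queries
print(round(aggregate_score(ratings), 2))  # → 8.8
```

Because the final number is a plain mean over judged responses, scores from different sources are only comparable when the same judge model and prompt are used.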

Trust tiers for Score (1-10): verified, paper, vendor, community, unverified.
| Rank | Model | Trust | Score | Year | Source |
|------|-------|-------|-------|------|--------|
| 01 | Qwen2.5-72B-Instruct (dataset: LongBench-Chat; task: 5) | paper | 8.72 | N/A | Source ↗ |
§ 04 · Submit a result

Add to the leaderboard.

← Back to Language Modeling