LongBench v2 · Language Modeling · benchmark dataset · EN

LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks.

LongBench v2 is a long-context benchmark designed to evaluate large language models’ ability to perform deep understanding and reasoning across realistic long-context multitasks. The benchmark contains 503 challenging multiple-choice questions with contexts ranging from ~8k to 2M words (majority under ~128k). It covers six major categories: single-document QA, multi-document QA, long in-context learning, long-dialogue history understanding, code-repository understanding, and long structured-data understanding. The authors provide evaluation modes with and without chain-of-thought (CoT) reasoning and categorize examples by short/medium/long context lengths to measure model performance as context size grows. Data and code are available from the project page and the Hugging Face dataset repository; the dataset is tagged for multiple-choice, question-answering, text-classification, and table-question-answering tasks.
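
To make the structure concrete, here is a minimal sketch of loading the benchmark with the Hugging Face `datasets` library. The repository id `THUDM/LongBench-v2`, the `train` split, and the field names (`question`, `choice_A` through `choice_D`, `answer`, `context`, `length`, `domain`) are assumptions based on the public dataset card; verify them against the card before depending on them.

```python
# Minimal sketch: load LongBench v2 and inspect one example.
# Assumed: the dataset lives at THUDM/LongBench-v2 on the Hub, exposes a
# "train" split, and uses the field names shown here; check the dataset
# card, since these names are not guaranteed by this page.
from datasets import load_dataset

ds = load_dataset("THUDM/LongBench-v2", split="train")
print(len(ds))  # 503 multiple-choice questions, per the paper

ex = ds[0]
print(ex["domain"])                      # one of the six task categories
print(ex["length"])                      # "short" / "medium" / "long" bucket
print(ex["question"])
for choice in ("choice_A", "choice_B", "choice_C", "choice_D"):
    print(choice, "->", ex[choice])
print("gold answer:", ex["answer"])      # a single letter, "A".."D"
print("context words:", len(ex["context"].split()))
```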

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 A public checkpoint or API endpoint
  • 02 A reproduction script with a frozen commit + seed (sketched below)
  • 03 A declared evaluation environment (Python version, dependencies)
  • 04 One row per metric declared by this dataset
  • 05 A contact so we can follow up on discrepancies
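
For item 02, a hedged sketch of what such a reproduction script might look like: freeze the seed and checkpoint reference, record the evaluation environment, and emit one accuracy row per length bucket plus an overall row. The `predict()` stub, the JSON report shape, and the repository id are illustrative assumptions, not a required format.

```python
"""Illustrative skeleton for a LongBench v2 reproduction script.

Everything below is a sketch: predict() is a stub standing in for your
model call, and the report shape is an assumption, not a required format.
"""
import json
import platform
import random
import sys

from datasets import load_dataset

SEED = 42                                       # frozen seed, declared up front
CHECKPOINT = "<public-checkpoint-or-endpoint>"  # frozen commit / endpoint


def predict(example: dict) -> str:
    """Return 'A', 'B', 'C', or 'D'. Replace this stub with a real model call."""
    raise NotImplementedError


def main() -> None:
    random.seed(SEED)
    ds = load_dataset("THUDM/LongBench-v2", split="train")  # assumed repo id

    correct: dict[str, int] = {}
    total: dict[str, int] = {}
    for ex in ds:
        bucket = ex["length"]                   # "short" / "medium" / "long"
        total[bucket] = total.get(bucket, 0) + 1
        correct[bucket] = correct.get(bucket, 0) + (predict(ex) == ex["answer"])

    # One row per metric: overall accuracy plus one row per length bucket.
    metrics = {"overall": sum(correct.values()) / sum(total.values())}
    metrics.update({b: correct[b] / total[b] for b in total})

    # Declared evaluation environment, recorded alongside the scores.
    env = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "seed": SEED,
        "checkpoint": CHECKPOINT,
    }
    print(json.dumps({"metrics": metrics, "environment": env}, indent=2))


if __name__ == "__main__":
    main()
```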