Language Modeling · benchmark dataset · EN

AI2 Reasoning Challenge (ARC)

The AI2 Reasoning Challenge (ARC) is a benchmark of 7,787 natural, grade-school-level multiple-choice science questions, authored for human standardized tests and designed to encourage research in advanced question answering and reasoning. The question set is partitioned into two subsets: ARC-Challenge (2,590 questions answered incorrectly by both a retrieval-based baseline and a word co-occurrence baseline) and ARC-Easy (the remaining 5,197 questions). The release also includes the ARC Corpus, roughly 14 million science-relevant sentences intended to support retrieval and knowledge components, along with baseline implementations. Because its questions demand deeper knowledge and reasoning than many earlier QA datasets, ARC is widely used to evaluate multiple-choice and open-domain QA systems.

License: CC BY-SA 4.0 · Language: English
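For orientation, here is a minimal sketch of inspecting the two subsets, assuming the community Hugging Face mirror allenai/ai2_arc; the mirror name and field layout are an assumption, not part of the official release.

from datasets import load_dataset

# Load both subsets; each ships train/validation/test splits
# of multiple-choice questions. The hub id is an assumed mirror.
challenge = load_dataset("allenai/ai2_arc", "ARC-Challenge")
easy = load_dataset("allenai/ai2_arc", "ARC-Easy")

print({split: len(rows) for split, rows in challenge.items()})

example = challenge["test"][0]
print(example["question"])          # question stem
print(example["choices"]["label"])  # lettered options, e.g. ["A", "B", "C", "D"]
print(example["choices"]["text"])   # answer option texts
print(example["answerKey"])         # gold label, e.g. "B"

Each record carries a question stem, its lettered options, and a gold answerKey, which is all an accuracy-style leaderboard entry needs.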

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 02 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
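A reproduction script covering items 02 through 04 might look like the sketch below. It is illustrative only: score_choice is a hypothetical stand-in for the submitted checkpoint or API call, and the data is again loaded from the assumed allenai/ai2_arc Hugging Face mirror.

import random
import sys

import numpy as np
from datasets import load_dataset  # assumes the allenai/ai2_arc hub mirror

SEED = 1234  # frozen seed, declared up front (item 02)
random.seed(SEED)
np.random.seed(SEED)

# Declared evaluation environment (item 03): record interpreter + deps.
print(f"python={sys.version.split()[0]} numpy={np.__version__}")

def score_choice(question: str, choice: str) -> float:
    """Hypothetical stand-in for the submitted model: higher = more plausible."""
    return random.random()  # replace with your checkpoint or API call

test = load_dataset("allenai/ai2_arc", "ARC-Challenge")["test"]
correct = 0
for ex in test:
    scores = [score_choice(ex["question"], c) for c in ex["choices"]["text"]]
    pred = ex["choices"]["label"][int(np.argmax(scores))]
    correct += pred == ex["answerKey"]

# One row per declared metric (item 04): ARC leaderboards report accuracy.
print(f"metric=accuracy split=test value={correct / len(test):.4f}")

Seeding every random source before the first model call, printing the environment up front, and emitting one machine-readable metric row at the end keeps the run reproducible and easy for us to verify.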