Language Modeling · benchmark dataset · EN

CommonsenseQA

CommonsenseQA is a multiple-choice question-answering benchmark that targets commonsense and world knowledge. Questions were authored by crowdworkers using ConceptNet relations: for each source concept the authors extracted several target concepts sharing a single semantic relation, and workers wrote questions that mention the source concept and discriminate among the targets. The benchmark contains roughly 12k five-way multiple-choice questions (the paper reports 12,247; the Hugging Face dataset card lists 12,102), each with one correct answer and four distractors. Standard train/validation/test splits are provided, and the task is hard for strong baselines: the original paper reports ~56% accuracy for BERT-large versus ~89% for humans.
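
The Hugging Face dataset card cited above makes the data easy to inspect. A minimal loading sketch, assuming the current Hub ID `tau/commonsense_qa` and the schema from the card (a `choices` dict of parallel `label`/`text` lists plus an `answerKey`):

    from datasets import load_dataset

    # Load all three splits from the Hugging Face Hub.
    # "tau/commonsense_qa" is the Hub ID on the dataset card; adjust
    # if your mirror hosts it under a different name.
    ds = load_dataset("tau/commonsense_qa")

    # Per the card: train 9,741 / validation 1,221 / test 1,140 (12,102 total).
    print({split: len(ds[split]) for split in ds})

    ex = ds["train"][0]
    print(ex["question"])             # question text mentioning the source concept
    for label, text in zip(ex["choices"]["label"], ex["choices"]["text"]):
        print(f"  ({label}) {text}")  # the five options, labeled A-E
    print("gold:", ex["answerKey"])   # gold label, e.g. "A" (empty on the hidden test split)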

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 02 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs (a minimal script sketch follows the checklist)
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
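
As a concrete illustration of items 02 through 04, here is a minimal sketch of a reproduction script, assuming the Hugging Face Hub ID `tau/commonsense_qa` and a single declared `accuracy` metric; neither is published by Codesota. The model is a stand-in (a seeded random-choice baseline) so the script runs end to end; a real submission would replace `predict` with a call to its checkpoint or API endpoint.

    import random

    from datasets import load_dataset

    SEED = 1234  # frozen seed, declared in the submission

    def predict(example, rng):
        # Stand-in model: picks one of the five labels uniformly at random.
        # A real submission would call the submitted checkpoint or endpoint here.
        return rng.choice(example["choices"]["label"])

    def main():
        rng = random.Random(SEED)
        # Score on the validation split; the test split's answer keys are hidden.
        ds = load_dataset("tau/commonsense_qa", split="validation")

        correct = sum(
            predict(example, rng) == example["answerKey"] for example in ds
        )

        # One row per metric declared by the dataset (here: accuracy).
        print(f"accuracy\t{correct / len(ds):.4f}")

    if __name__ == "__main__":
        main()

Pinning the seed and emitting one tab-separated row per metric keeps the run deterministic and makes the output easy to diff against the submitted score.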