Language Modeling · benchmark dataset · EN

HellaSwag: Can a Machine Really Finish Your Sentence?

HellaSwag is a multiple-choice commonsense sentence-completion (commonsense NLI) benchmark introduced by Zellers et al. (ACL 2019). Each example provides a short context and four candidate endings; the task is to pick the most plausible continuation. The dataset was built with Adversarial Filtering (AF), which selects machine-generated distractors that are trivial for humans to reject but difficult for models. Source contexts are drawn from ActivityNet captions and WikiHow articles. The standard splits on the Hugging Face / official release are roughly 39.9k train, 10k validation, and 10k test examples (≈60k total). Reported human accuracy is above 95%, while models at publication time scored substantially lower (the paper reports under ~48%).
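
The format maps naturally onto a zero-shot likelihood evaluation: condition a language model on the context and score each of the four endings. The sketch below illustrates this with the Hugging Face datasets and transformers libraries; the choice of GPT-2, the 100-example sample, and the unnormalized log-probability scoring are illustrative assumptions, not the paper's official evaluation setup.

```python
# Minimal zero-shot HellaSwag sketch: score each candidate ending by LM
# likelihood and pick the highest-scoring one. Tokenization-boundary effects
# and length normalization are ignored for brevity.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

dataset = load_dataset("hellaswag", split="validation")  # ~10k examples
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def ending_logprob(context: str, ending: str) -> float:
    """Sum of token log-probabilities of `ending`, conditioned on `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + ending, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Next-token log-probabilities for the ending tokens only (shift by one).
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    ending_positions = range(ctx_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(log_probs[i, full_ids[0, i + 1]].item() for i in ending_positions)

correct = 0
for ex in dataset.select(range(100)):  # small sample, purely for illustration
    scores = [ending_logprob(ex["ctx"], end) for end in ex["endings"]]
    correct += int(max(range(4), key=lambda i: scores[i]) == int(ex["label"]))
print(f"accuracy on sample: {correct / 100:.3f}")
```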

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the corresponding step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed (see the sketch after this list)
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
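
As a concrete illustration of items 01–04, a reproduction script might pin its seed and record its environment along the lines of the hypothetical skeleton below; every file name, checkpoint identifier, and function in it is a placeholder rather than a CodeSOTA requirement.

```python
# reproduce.py -- hypothetical submission skeleton; names are placeholders.
import json
import platform
import random

import numpy as np
import torch

SEED = 1234                          # frozen seed (item 02)
COMMIT = "abc1234"                   # frozen commit of the evaluation code (item 02)
CHECKPOINT = "your-org/your-model"   # public checkpoint (item 01), placeholder

random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)

# Declared evaluation environment (item 03).
environment = {
    "python": platform.python_version(),
    "torch": torch.__version__,
    "commit": COMMIT,
    "seed": SEED,
}

# run_hellaswag_eval() stands in for whatever evaluation entry point the
# submission actually uses; it should return one value per declared metric (item 04).
# results = run_hellaswag_eval(CHECKPOINT, split="test")
results = {"accuracy": None}  # placeholder: one row per declared metric

print(json.dumps({"environment": environment, "results": results}, indent=2))
```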