HellaSwag is a multiple-choice commonsense sentence-completion (commonsense NLI) benchmark introduced by Zellers et al. (ACL 2019). Each example provides a short context and four candidate endings, and the task is to pick the most plausible continuation. The dataset was built with Adversarial Filtering (AF), which selects challenging machine-generated distractors, making the examples trivial for humans but difficult for models. Source contexts are drawn from ActivityNet captions and WikiHow articles. The standard splits in the Hugging Face / official release are roughly 39.9k train, 10k validation, and 10k test examples (about 60k total). Reported human accuracy is above 95%, while the strongest models at publication time scored under roughly 48%.
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script (a minimal evaluation sketch is shown below). We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
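As a starting point, here is a minimal evaluation sketch under some assumptions: it loads the Hugging Face `hellaswag` dataset (field names `ctx`, `endings`, `label` as in that release), uses `gpt2` purely as a placeholder checkpoint, and scores each ending by its average token log-likelihood given the context, a common convention for this benchmark but not necessarily our official harness. Your reproduction script can follow any equivalent recipe as long as it reports accuracy on the standard split.

```python
# Minimal HellaSwag evaluation sketch (not the official harness).
# Assumptions: the "hellaswag" dataset on the Hugging Face Hub with fields
# "ctx", "endings" (list of 4 strings), "label" (index as a string), and a
# placeholder causal LM checkpoint ("gpt2"). Scoring: length-normalized
# log-likelihood of each ending appended to the context; highest score wins.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute your submitted checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def ending_logprob(context: str, ending: str) -> float:
    """Average log-probability of the ending tokens, conditioned on the context."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + " " + ending, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i of log_probs predicts the token at position i + 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    # Assumes the context tokenization is a prefix of the full tokenization,
    # which holds closely enough for a sketch.
    ending_positions = range(ctx_ids.shape[1] - 1, full_ids.shape[1] - 1)
    token_lps = [log_probs[i, full_ids[0, i + 1]].item() for i in ending_positions]
    return sum(token_lps) / max(len(token_lps), 1)

val = load_dataset("hellaswag", split="validation")
sample = val.select(range(100))  # small sample for illustration only
correct = 0
for ex in sample:
    scores = [ending_logprob(ex["ctx"], e) for e in ex["endings"]]
    pred = max(range(4), key=lambda i: scores[i])
    correct += int(pred == int(ex["label"]))  # label is stored as a string here
print(f"accuracy on sample: {correct / len(sample):.3f}")
```

For a real submission, run over the full validation (or test) split rather than a 100-example sample, and state the scoring rule (raw vs. length-normalized log-likelihood) so the result can be reproduced exactly.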