
BIG-Bench Hard (BBH)

BIG-Bench Hard (BBH) is a curated subset of 23 challenging tasks from the BIG-Bench benchmark, selected because prior language models had failed to outperform the average human rater on them. Introduced in "Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them" (Suzgun et al., 2022; Findings of ACL 2023), BBH spans diverse, hard reasoning and understanding tasks, including boolean_expressions, logical_deduction (three/five/seven objects), dyck_languages, multistep_arithmetic_two, object_counting, tracking_shuffled_objects, and salient_translation_error_detection. BBH is evaluated with few-shot prompting, with and without chain-of-thought (CoT) exemplars, to study whether CoT helps solve these harder tasks. The suite is distributed on Hugging Face and GitHub as "BIG-Bench Hard" and is widely used to benchmark advanced reasoning capabilities.
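
For concreteness, here is a minimal sketch of how a single BBH task is typically loaded and scored. The Hugging Face dataset id `lukaemon/bbh` is a commonly used community mirror, and `query_model` is a hypothetical placeholder for your own model call; both are assumptions, not part of the official release.

```python
# A sketch of exact-match scoring on one BBH task. Assumes the
# community mirror "lukaemon/bbh" on Hugging Face; query_model()
# is a placeholder for however you call your model.
from datasets import load_dataset

def query_model(prompt: str) -> str:
    """Placeholder: send `prompt` to your model and return its final answer."""
    raise NotImplementedError

# Each BBH task is a separate config; examples carry an "input"
# question and a gold "target" answer in the "test" split.
task = load_dataset("lukaemon/bbh", "boolean_expressions", split="test")

correct = 0
for example in task:
    prediction = query_model(example["input"])
    # BBH is conventionally scored by exact match on the final answer.
    correct += prediction.strip() == example["target"].strip()

print(f"exact-match accuracy: {correct / len(task):.3f}")
```

Under the paper's few-shot CoT protocol, each input would additionally be prefixed with three chain-of-thought exemplars for the task before querying the model; the exemplar prompts ship with the official GitHub release.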

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 02 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
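
For illustration, a minimal reproduction script covering items 02 and 03 might look like the sketch below. Every URL, commit hash, and file name in it is a hypothetical placeholder, not part of this site's tooling; substitute your own.

```python
#!/usr/bin/env python3
# Hypothetical skeleton of a reproduction script (items 02 and 03).
# The repository URL, commit hash, entry point, and model id are placeholders.
import random
import subprocess

EVAL_REPO = "https://github.com/your-org/bbh-eval"  # placeholder URL
COMMIT = "0123abc"                                   # frozen commit (placeholder)
SEED = 42                                            # declared seed

# Pin the evaluation code to the declared commit.
subprocess.run(["git", "clone", EVAL_REPO, "eval"], check=True)
subprocess.run(["git", "-C", "eval", "checkout", COMMIT], check=True)

# Fix the seed so any sampling in the harness reproduces exactly.
random.seed(SEED)

# Declared environment: install the pinned dependencies first, e.g.
#   pip install -r eval/requirements.txt
subprocess.run(
    ["python", "eval/run_bbh.py",        # placeholder entry point
     "--model", "your-checkpoint",       # placeholder model id
     "--seed", str(SEED)],
    check=True,
)
```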