Codesota · Reasoning · Multi-step Reasoning · BIG-Bench Hard
Multi-step Reasoning · benchmark dataset · 2022 · EN

BIG-Bench Hard (BBH).

BIG-Bench Hard is a curated subset of 23 challenging BIG-Bench tasks that require multi-step reasoning and on which chain-of-thought prompting substantially improves performance. Tasks include algorithmic reasoning, logical deduction, causal judgment, and more. By 2024–2025, frontier models were approaching saturation (>90%) on BBH, motivating the creation of the harder BBEH variant.
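BBH is typically scored by exact match on each task's target string, with a chain-of-thought prompt eliciting intermediate reasoning before the final answer. Below is a minimal sketch of a zero-shot CoT evaluation loop, assuming the community `lukaemon/bbh` dataset on the Hugging Face Hub (fields `input`/`target`, split `test`) and a caller-supplied `generate` function wrapping whatever model is under test; none of these names come from this page.

```python
# Sketch of a zero-shot chain-of-thought evaluation on one BBH task.
# Assumptions: the `lukaemon/bbh` HF dataset with `input`/`target` fields
# and a `test` split; `generate` is any str -> str model call you supply.
from datasets import load_dataset

COT_TRIGGER = "Let's think step by step."

def build_prompt(question: str) -> str:
    # Zero-shot CoT: append the reasoning trigger and ask for a final answer line.
    return f"{question}\n\n{COT_TRIGGER} End with 'Answer: <answer>'."

def extract_answer(completion: str) -> str:
    # Take the text after the last 'Answer:' marker, if present.
    marker = "Answer:"
    return completion.rsplit(marker, 1)[-1].strip() if marker in completion else completion.strip()

def evaluate(generate, task: str = "logical_deduction_three_objects", limit: int = 50) -> float:
    # Exact-match accuracy over the first `limit` test examples of one task.
    data = load_dataset("lukaemon/bbh", task, split="test").select(range(limit))
    correct = sum(
        extract_answer(generate(build_prompt(ex["input"]))) == ex["target"]
        for ex in data
    )
    return correct / len(data)
```

A full BBH score is the average of per-task accuracies across all 23 tasks; the sketch above covers a single task.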

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

5 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: accuracy · higher is better · 5 rows
# · Model · Org · Submitted · Paper / code · accuracy
01 · Claude 3.5 Sonnet (API) · Anthropic · Mar 2026 · llm-stats-bbh · 93.10
02 · Gemini 1.5 Pro (API) · Google · Mar 2026 · llm-stats-bbh · 89.20
03 · Gemma-3-27b · Google · Mar 2026 · llm-stats-bbh · 87.60
04 · Claude 3 Opus (API) · Anthropic · Mar 2026 · llm-stats-bbh · 86.80
05 · Llama 3.1 405B (OSS) · Meta · Mar 2026 · llm-stats-bbh · 85.90
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
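The ranking rule stated above (score descending, earlier submission wins ties) is easy to express directly. A sketch with an illustrative row shape, not Codesota's actual schema:

```python
# Sketch of the leaderboard ranking rule: sort by accuracy descending,
# break ties by earlier submission date. Row fields are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class Row:
    model: str
    org: str
    submitted: date
    accuracy: float

def rank(rows: list[Row]) -> list[Row]:
    # Higher accuracy first; on equal accuracy, the earlier submission wins.
    return sorted(rows, key=lambda r: (-r.accuracy, r.submitted))

rows = [
    Row("Gemini 1.5 Pro", "Google", date(2026, 3, 28), 89.20),
    Row("Claude 3.5 Sonnet", "Anthropic", date(2026, 3, 28), 93.10),
]
sota = rank(rows)[0]  # the shaded row in the table above
```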
§ 03 · Progress

1 step of state of the art.

Each row below marks a model that broke the previous record on accuracy. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.
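Deriving this list from the leaderboard is a running-maximum filter: walk entries in submission order and keep only those that beat the best score so far. A sketch, assuming `(date, model, score)` tuples rather than any real Codesota schema:

```python
# Sketch: extract SOTA-setting entries from a list of (date, model, score)
# tuples by keeping a running maximum over entries in chronological order.
def sota_steps(rows):
    best = float("-inf")
    steps = []
    for submitted, model, score in sorted(rows, key=lambda r: r[0]):
        if score > best:  # strict improvement breaks the record
            best = score
            steps.append((submitted, model, score))
    return steps
```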

SOTA line · accuracy
  1. Mar 28, 2026 · Claude 3.5 Sonnet · Anthropic · 93.10
Fig 3 · SOTA-setting models only. 1 entry, dated Mar 2026.
§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed (see the sketch below)
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
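A hypothetical skeleton for such a reproduction script, covering requirements 02 and 03; names like `frozen_commit` and `declared_environment` are illustrative, not part of any Codesota tooling:

```python
# Hypothetical reproduction-script skeleton: pin the commit and seed,
# declare the environment, then run the eval. Standard library only.
import json
import platform
import random
import subprocess
import sys

SEED = 1234  # frozen seed (requirement 02)

def frozen_commit() -> str:
    # Record the exact code revision the score was produced from.
    return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()

def declared_environment() -> dict:
    # Declared evaluation environment (requirement 03).
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "commit": frozen_commit(),
        "seed": SEED,
    }

if __name__ == "__main__":
    random.seed(SEED)  # add numpy/torch seeding here if your eval uses them
    print(json.dumps(declared_environment(), indent=2))
    # ... run the benchmark and emit one row per declared metric (requirement 04)
```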