BIG-Bench Hard is a curated subset of 23 challenging tasks from BIG-Bench that require multi-step reasoning, where chain-of-thought prompting significantly helps performance. Tasks include algorithmic reasoning, logical deduction, causal judgment, and more. By 2024–2025, frontier models were approaching saturation (>90%) on BBH, prompting the creation of the harder BBEH variant.
5 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|
| 01 | Claude 3.5 Sonnet (API) | Anthropic | Mar 2026 | llm-stats-bbh | 93.10 |
| 02 | Gemini 1.5 Pro (API) | Google | Mar 2026 | llm-stats-bbh | 89.20 |
| 03 | Gemma-3-27b | Google | Mar 2026 | llm-stats-bbh | 87.60 |
| 04 | Claude 3 Opus (API) | Anthropic | Mar 2026 | llm-stats-bbh | 86.80 |
| 05 | Llama 3.1 405B (OSS) | Meta | Mar 2026 | llm-stats-bbh | 85.90 |
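The ordering above follows the stated rule: higher accuracy wins, and ties are broken by the earlier submission date. A minimal sketch of that sort (model names, scores, and dates below are illustrative, not taken from the table):

```python
from datetime import date

# Hypothetical entries: (model, accuracy, submission date)
entries = [
    ("Model A", 89.20, date(2026, 3, 10)),
    ("Model B", 93.10, date(2026, 3, 5)),
    ("Model C", 93.10, date(2026, 3, 1)),  # same score as B, but earlier
]

# Sort by accuracy descending; break ties with the earlier submission date.
ranked = sorted(entries, key=lambda e: (-e[1], e[2]))

for pos, (model, acc, day) in enumerate(ranked, start=1):
    print(f"{pos:02d} {model} {acc:.2f}")
```

Because Python's sort is stable and the key sorts dates ascending, the earlier of two tied submissions keeps the higher rank.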
Each row below marks a model that broke the previous accuracy record. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here. Higher scores win, so each successive entry improved on the previous best.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
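BBH tasks are conventionally scored by exact-match accuracy against the gold answers, so a reproduction script ultimately reduces to a comparison like the one below (the answer strings here are illustrative):

```python
def exact_match_accuracy(predictions, targets):
    """Percentage of predictions that exactly match the gold answer."""
    assert len(predictions) == len(targets), "one prediction per example"
    hits = sum(p.strip() == t.strip() for p, t in zip(predictions, targets))
    return 100.0 * hits / len(targets)

# Two of three hypothetical multiple-choice answers match the gold labels.
score = exact_match_accuracy(["(A)", "(B)", "(C)"], ["(A)", "(B)", "(D)"])
print(f"{score:.2f}")
```

Whitespace is stripped before comparison; any further answer normalization (case-folding, extracting the final answer from a chain-of-thought trace) is up to the submitted script.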