An olympiad-style, short-answer math benchmark commonly reported in reasoning-model releases. The test set is small, so score swings should be read with caution.
5 results indexed across 1 metric (accuracy). The top row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|
| 01 | o4-mini (API) | OpenAI | Mar 2026 | openai-system-card | 92.70 |
| 02 | o3 (API) | OpenAI | Mar 2026 | openai-system-card | 86.70 |
| 03 | Gemini 2.5 Pro (API) | Google | Mar 2026 | google-technical-report | 86.70 |
| 04 | Claude Opus 4.5 (API) | Anthropic | Mar 2026 | anthropic-model-card | 80.00 |
| 05 | DeepSeek R1 (OSS) | DeepSeek | Mar 2026 | arxiv | 72.00 |
Each row below marks a model that broke the previous accuracy record. Intermediate submissions stay in the leaderboard above; only SOTA-setting entries are re-listed here. Higher scores win, so each successive entry improved on the previous best; a sketch of how such a progression can be derived follows.
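As a minimal sketch of that record-breaking logic (the records, submission dates, and function names here are illustrative, not the leaderboard's actual pipeline), the progression is just a running maximum over submissions ordered by date, with ties kept by the earlier entry:

```python
from datetime import date

# Illustrative records mirroring the leaderboard above; day-level dates
# are invented for the example, since the table only lists "Mar 2026".
results = [
    ("DeepSeek R1", 72.00, date(2026, 3, 1)),
    ("Claude Opus 4.5", 80.00, date(2026, 3, 2)),
    ("Gemini 2.5 Pro", 86.70, date(2026, 3, 3)),
    ("o3", 86.70, date(2026, 3, 4)),
    ("o4-mini", 92.70, date(2026, 3, 5)),
]

def sota_steps(results):
    """Return only the entries that strictly beat the running best.

    A tie does NOT create a new step: on equal accuracy the earlier
    submission keeps the record, matching the tie-break rule above.
    """
    steps, best = [], float("-inf")
    for model, acc, submitted in sorted(results, key=lambda r: r[2]):
        if acc > best:
            steps.append((model, acc, submitted))
            best = acc
    return steps

for model, acc, submitted in sota_steps(results):
    print(f"{submitted}: {model} set a new record at {acc:.2f}")
```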
Submit a checkpoint and a reproduction script. We will run the script, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
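A reproduction script might be structured like the sketch below. The JSONL file format, the `--checkpoint`/`--data` flags, and the exact-match scoring rule are all assumptions for illustration; this page does not specify the benchmark's real harness, and short-answer graders often apply stricter numeric or symbolic equality checks.

```python
#!/usr/bin/env python3
"""Sketch of a reproduction script. The data format, model hook, and
scoring rule are assumptions, not the benchmark's actual harness."""
import argparse
import json

def load_problems(path):
    # Assumed format: one JSON object per line with "problem" and "answer".
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def normalize(ans):
    # Canonicalize answers before comparing; real harnesses typically
    # do more than this (e.g. numeric tolerance, LaTeX stripping).
    return str(ans).strip().lower()

def evaluate(solve, problems):
    correct = sum(
        normalize(solve(p["problem"])) == normalize(p["answer"])
        for p in problems
    )
    return 100.0 * correct / len(problems)

def main():
    parser = argparse.ArgumentParser(description="Score a checkpoint")
    parser.add_argument("--checkpoint", required=True, help="model weights")
    parser.add_argument("--data", required=True, help="JSONL test file")
    args = parser.parse_args()

    # Placeholder inference hook: always answers wrong. A real submission
    # would load `args.checkpoint` and return the model's final answer.
    def solve(problem_text):
        return ""

    problems = load_problems(args.data)
    print(f"accuracy: {evaluate(solve, problems):.2f}")

if __name__ == "__main__":
    main()
```

Keeping inference behind a single `solve` hook means the scoring path stays identical across submissions; only the model-loading code varies.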