A benchmark of 7,787 science questions requiring reasoning. The challenge set contains the harder questions on which retrieval-based methods fail.
10 results indexed on a single metric (accuracy). The shaded row marks the current SOTA; ties are broken by earlier submission date.
| # | Model | Access | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|---|
| 01 | o3 | API | OpenAI | Mar 2026 | openai-simple-evals | 98.10 |
| 02 | Gemini 2.5 Pro | API | Google | Mar 2026 | google-technical-report | 97.80 |
| 03 | Llama-4-Maverick | OSS | Meta | Mar 2026 | meta-blog | 97.40 |
| 04 | o4-mini | API | OpenAI | Mar 2026 | openai-simple-evals | 97.30 |
| 05 | DeepSeek R1 | OSS | DeepSeek | Mar 2026 | arxiv | 97.10 |
| 06 | Llama 3.1 405B | OSS | Meta | Mar 2026 | meta-modelcard | 96.90 |
| 07 | Claude 3.5 Sonnet | API | Anthropic | Dec 2025 | anthropic-blog | 96.70 |
| 08 | GPT-4o | API | OpenAI | Dec 2025 | openai-blog | 96.40 |
| 09 | Gemini 1.5 Pro | API | Google | Dec 2025 | google-blog | 94.80 |
| 10 | Llama 3 70B | OSS | Meta | Dec 2025 | meta-blog | 93.00 |
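The ranking rule can be sketched as follows: sort by accuracy descending, with ties going to the earlier submission. The entries below are a subset of the table; submission dates are approximated to the first of the month for illustration.

```python
from datetime import date

# (model, accuracy, submission date) — a subset of the leaderboard above.
entries = [
    ("GPT-4o", 96.40, date(2025, 12, 1)),
    ("Claude 3.5 Sonnet", 96.70, date(2025, 12, 1)),
    ("o3", 98.10, date(2026, 3, 1)),
]

# Rank by accuracy (higher wins); equal scores fall back to the earlier date.
ranked = sorted(entries, key=lambda e: (-e[1], e[2]))
print(ranked[0][0])  # current SOTA: o3
```

Negating the accuracy lets a single ascending sort express both rules at once.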
Each row below marks a model that broke the previous accuracy record. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
Higher scores win; each subsequent entry improved on the previous best.
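Selecting the record-setting entries amounts to a running-maximum scan over submissions in chronological order. A minimal sketch, using scores from the table above (the within-month ordering is illustrative):

```python
# Submissions in rough chronological order, with their accuracy scores.
history = [
    ("Llama 3 70B", 93.00),
    ("GPT-4o", 96.40),
    ("Claude 3.5 Sonnet", 96.70),
    ("Gemini 1.5 Pro", 94.80),
    ("o3", 98.10),
]

# Keep only submissions that beat every earlier score — these are the
# record-setting entries annotated on the progress chart.
sota, best = [], float("-inf")
for model, acc in history:
    if acc > best:
        sota.append((model, acc))
        best = acc

print([m for m, _ in sota])
```

Gemini 1.5 Pro is retained in the full leaderboard but dropped here, since it did not beat the record standing at its submission time.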
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
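The core of a reproduction script is a scoring loop over the question set. A minimal sketch of that shape — the `evaluate` helper, the field names, and the stub predictor are all hypothetical, not the benchmark's actual harness:

```python
def evaluate(predict, questions):
    """Return accuracy (%) of a predict(question) -> answer callable
    over a list of {"question": ..., "answer": ...} records."""
    correct = sum(predict(q["question"]) == q["answer"] for q in questions)
    return 100.0 * correct / len(questions)

# Illustrative check with a stub predictor and two made-up items:
sample = [
    {"question": "Which gas do plants absorb?", "answer": "CO2"},
    {"question": "What force pulls objects toward Earth?", "answer": "gravity"},
]
print(f"{evaluate(lambda q: 'CO2', sample):.2f}")  # 50.00
```

A real submission would replace the stub with code that loads your checkpoint and maps each question to a model prediction.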