Harder MMMU variant with vision-only questions and ten answer choices; it closes the text-only shortcuts that models exploited in the original.
Five results, one metric. The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|
| 01 | Gemini 3.1 Pro (API) | Google | Mar 2026 | artificialanalysis.ai | 82.0 |
| 02 | GPT-5.2 (API) | OpenAI | Dec 2025 | artificialanalysis.ai | 81.0 |
| 03 | Gemini 3 Pro (API) | Google | Jan 2026 | artificialanalysis.ai | 80.0 |
| 04 | GPT-5.1 | OpenAI | Nov 2025 | artificialanalysis.ai | 76.5 |
| 05 | Qwen3.6 Plus | Alibaba Cloud | Mar 2026 | artificialanalysis.ai | 73.8 |
Each row below marks a model that broke the previous accuracy record. Intermediate submissions remain in the leaderboard above; only record-setting entries are re-listed here. Higher scores win, and each successive entry improved on the previous best.
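The record-setting filter described above is a running maximum over submissions in date order: an entry is kept only if it scores strictly higher than everything before it. A minimal sketch using the table's own rows (the function name is illustrative, not part of the leaderboard's tooling):

```python
# Leaderboard rows ordered by submission date (data from the table above).
entries = [
    ("GPT-5.1", "Nov 2025", 76.5),
    ("GPT-5.2", "Dec 2025", 81.0),
    ("Gemini 3 Pro", "Jan 2026", 80.0),
    ("Gemini 3.1 Pro", "Mar 2026", 82.0),
    ("Qwen3.6 Plus", "Mar 2026", 73.8),
]

def sota_steps(entries):
    """Keep only entries that strictly beat the best accuracy so far."""
    best = float("-inf")
    steps = []
    for name, date, acc in entries:
        if acc > best:  # ties do not set a new record
            best = acc
            steps.append((name, date, acc))
    return steps
```

Running this on the five rows keeps GPT-5.1, GPT-5.2, and Gemini 3.1 Pro; Gemini 3 Pro and Qwen3.6 Plus scored below the standing record and are skipped.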
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
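The reported metric reduces to exact-match accuracy over the ten answer letters, which your reproduction script should compute. A minimal sketch, assuming predictions and gold answers are letter choices A-J (the function name and input format are assumptions, not the benchmark's actual harness API):

```python
def score(predictions, answers):
    """Return exact-match accuracy as a percentage.

    predictions, answers: equal-length lists of answer letters ("A".."J"),
    one per question (this format is an assumption for illustration).
    """
    if len(predictions) != len(answers):
        raise ValueError("prediction/answer length mismatch")
    correct = sum(p == a for p, a in zip(predictions, answers))
    return 100.0 * correct / len(answers)
```

Scores in the table are percentages on this scale, so a perfect run returns 100.0.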