A benchmark of 400 evaluation tasks testing abstract visual reasoning, created by François Chollet. Scores near the human average (~85%) remained out of reach for LLMs until 2024.
5 results indexed across 1 metric. The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|
| 01 | o3 (API) | OpenAI | Mar 2026 | arcprize-leaderboard | 87.50 |
| 02 | o3 (high, API) | OpenAI | Mar 2026 | arcprize-leaderboard | 87.50 |
| 03 | o4-mini (API) | OpenAI | Mar 2026 | arcprize-leaderboard | 79.00 |
| 04 | Gemini 2.5 Pro (API) | Google | Mar 2026 | google-technical-report | 56.10 |
| 05 | Claude 3.7 Sonnet (API) | Anthropic | Mar 2026 | arcprize-leaderboard | 30.00 |
Each row below marks a model that broke the previous accuracy record. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here. Higher scores win: each successive entry improved on the previous best.
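The record progression can be derived mechanically from the leaderboard: walk submissions oldest-first and keep only those that strictly beat the best accuracy seen so far (a tie does not displace the earlier record holder, matching the tie-break rule above). A minimal sketch, with the table's rows hard-coded as illustrative data:

```python
# Leaderboard rows as shown above, ordered by rank (newest record first):
# (model, org, submitted, accuracy %)
rows = [
    ("o3", "OpenAI", "Mar 2026", 87.50),
    ("o3 (high)", "OpenAI", "Mar 2026", 87.50),
    ("o4-mini", "OpenAI", "Mar 2026", 79.00),
    ("Gemini 2.5 Pro", "Google", "Mar 2026", 56.10),
    ("Claude 3.7 Sonnet", "Anthropic", "Mar 2026", 30.00),
]

def sota_progression(rows):
    """Return the entries that set a new accuracy record, oldest first."""
    best = float("-inf")
    records = []
    for model, org, submitted, acc in reversed(rows):  # table is newest-first
        if acc > best:  # strict improvement only; ties keep the earlier record
            best = acc
            records.append((model, acc))
    return records

print(sota_progression(rows))
```

Note that "o3" does not appear in the progression: it ties "o3 (high)" at 87.50, so the earlier of the two keeps the record step.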
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
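For reference, the metric we publish is mean per-task exact-match accuracy over the evaluation set. The directory layout and function names below are hypothetical, but the task JSON structure (a `"test"` list of `"input"`/`"output"` integer grids, alongside the `"train"` demonstration pairs) follows the public ARC format. A minimal scoring sketch, assuming your model is exposed as a `predict(train_pairs, test_input)` callable:

```python
import json
from pathlib import Path

def score_task(task, predict):
    """Fraction of a task's test outputs the model reproduces exactly."""
    tests = task["test"]
    hits = sum(predict(task["train"], t["input"]) == t["output"] for t in tests)
    return hits / len(tests)

def score_dir(task_dir, predict):
    """Mean exact-match accuracy over all task files, as a percentage."""
    paths = sorted(Path(task_dir).glob("*.json"))
    scores = [score_task(json.loads(p.read_text()), predict) for p in paths]
    return 100.0 * sum(scores) / len(scores)

# A trivial baseline "model": predict the test input unchanged.
identity = lambda train_pairs, test_input: test_input
```

A grid counts as correct only if every cell matches; partial credit is not awarded within a grid.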