Measures the length of tasks AI agents can reliably complete autonomously. Task horizon is the task length at which the agent's success rate is 50%. Higher = the agent can handle longer multi-step tasks without human intervention.
5 results indexed across 1 metric. The shaded row marks the current SOTA; ties are broken by submission date.
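The horizon metric can be estimated by fitting a logistic curve of success probability against log task length and solving for the 50% crossing. A minimal sketch of that idea, not METR's actual code; the function name and the synthetic data are hypothetical:

```python
import math
import random

def fit_task_horizon(lengths_min, successes, lr=0.1, steps=5000):
    """Fit success ~ logistic(a + b * log2(length)) by gradient descent,
    then return the length (in minutes) where the fitted curve crosses 50%."""
    xs = [math.log2(t) for t in lengths_min]
    a, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        ga = gb = 0.0
        for x, y in zip(xs, successes):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))  # predicted success prob
            ga += (p - y) / n                          # d(loss)/da
            gb += (p - y) * x / n                      # d(loss)/db
        a -= lr * ga
        b -= lr * gb
    # p = 0.5 exactly when a + b*x = 0, i.e. x = -a/b in log2-minutes.
    return 2 ** (-a / b)

# Synthetic trials with a known 30-minute horizon (illustrative only).
random.seed(0)
true_horizon = 30.0
lengths = [2 ** (i / 2) for i in range(2, 16)]  # ~2 min to ~181 min
data = [(t, 1 if random.random() < 1 / (1 + (t / true_horizon) ** 2) else 0)
        for t in lengths for _ in range(200)]
L, S = zip(*data)
print(fit_task_horizon(L, S))  # recovers roughly 30 minutes
```

The fit is an ordinary two-parameter logistic regression, so any standard solver would do; gradient descent is used here only to keep the sketch dependency-free.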
| # | Model | Org | Submitted | Paper / code | Task horizon (min) |
|---|---|---|---|---|---|
| 01 | Claude Opus 4 | Anthropic | Apr 2025 | METR: Measuring Autonomy in AI Systems (2025 Update) | 60.00 |
| 02 | o3 | OpenAI | Apr 2025 | METR: Measuring Autonomy in AI Systems (2025 Update) | 30.00 |
| 03 | Claude 3.7 Sonnet | Anthropic | Apr 2025 | METR: Measuring Autonomy in AI Systems (2025 Update) | 14.00 |
| 04 | o1 | OpenAI | Apr 2025 | METR: Measuring Autonomy in AI Systems (2025 Update) | 4.00 |
| 05 | GPT-4 Turbo (2024) | OpenAI | Apr 2025 | METR: Measuring Autonomy in AI Systems (2025 Update) | 2.00 |
Each row below marks a model that broke the previous record on task-horizon-minutes. Higher scores win; each subsequent entry improved on the previous best. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
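The record-setting rule above is a running-max filter over submissions in chronological order. A small sketch, assuming the table's scores arrive in the order the models were released (the function name is hypothetical):

```python
def sota_steps(entries):
    """Keep only entries that strictly beat the best score seen so far
    (higher wins), i.e. the record-setting steps on the progress chart."""
    best = float("-inf")
    steps = []
    for name, score in entries:
        if score > best:
            best = score
            steps.append((name, score))
    return steps

# Scores from the leaderboard above, assumed to be in release order.
entries = [
    ("GPT-4 Turbo (2024)", 2.0),
    ("o1", 4.0),
    ("Claude 3.7 Sonnet", 14.0),
    ("o3", 30.0),
    ("Claude Opus 4", 60.0),
]
print(sota_steps(entries))  # every entry here beat the previous best
```

An entry whose score does not exceed the running best is skipped, which is exactly why intermediate submissions appear in the full leaderboard but not in the progress list.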
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.