Elo-rated competitive programming benchmark built from continuously updated Codeforces, ICPC, and IOI problems. Each LLM is treated as a virtual Codeforces contestant; ratings are fit via Bayesian MAP Elo on the standard Codeforces scale (~800 novice to ~3800 top human). Built by Olympiad medalists to limit contamination.
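For intuition, here is a minimal sketch of a Bayesian MAP Elo fit of this kind, assuming the standard Codeforces logistic solve model and a Gaussian prior on ratings; the difficulties, outcomes, prior parameters, and step size below are illustrative assumptions, not the benchmark's actual setup.

```python
import numpy as np

def fit_map_elo(d, y, prior_mean=1500.0, prior_sd=350.0, lr=1e4, steps=2000):
    """MAP estimate of one model's Elo from per-problem solve outcomes.

    d : problem difficulty ratings (Codeforces scale)
    y : 0/1 solve outcomes, one per problem
    """
    d, y = np.asarray(d, float), np.asarray(y, float)
    r = prior_mean  # start the search at the prior mean
    for _ in range(steps):
        # Logistic solve probability: p = 1 / (1 + 10^((d - r) / 400))
        p = 1.0 / (1.0 + 10.0 ** ((d - r) / 400.0))
        grad_ll = np.log(10.0) / 400.0 * np.sum(y - p)  # Bernoulli log-likelihood
        grad_prior = -(r - prior_mean) / prior_sd ** 2  # Gaussian log-prior
        r += lr * (grad_ll + grad_prior)                # ascend the log-posterior
    return r

# Toy data: six problems of rising difficulty, only the easier three solved.
difficulties = [1200, 1500, 1800, 2100, 2400, 2700]
outcomes = [1, 1, 1, 0, 0, 0]
print(round(fit_map_elo(difficulties, outcomes)))  # ~1800 on this toy data
```

The prior keeps ratings of models with few informative outcomes (everything solved, or nothing) from diverging to ±∞, which is the usual reason to prefer MAP over plain maximum likelihood here.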
9 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | Elo |
|---|---|---|---|---|---|
| 01 | Gemini 3 Pro (API) | Google | Apr 2026 | livecodebench-pro-official | 2439 |
| 02 | GPT-5 (API) | OpenAI | Apr 2026 | livecodebench-pro-official | 2176 |
| 03 | o4-mini (API) | OpenAI | Apr 2026 | livecodebench-pro-official | 2092 |
| 04 | Gemini 2.5 Pro | Google | Apr 2026 | livecodebench-pro-official | 1769 |
| 05 | Qwen3-235B-A22B | Alibaba | Apr 2026 | livecodebench-pro-official | 1673 |
| 06 | Claude Sonnet 4.5 (API) | Anthropic | Apr 2026 | livecodebench-pro-official | 1412 |
| 07 | Gemini 2.5 Flash | Google | Apr 2026 | livecodebench-pro-official | 1288 |
| 08 | DeepSeek R1 (OSS) | DeepSeek | Apr 2026 | livecodebench-pro-official | 1161 |
| 09 | o3 (API) | OpenAI | Apr 2026 | livecodebench-pro-official | 1010 |
Each row below marks a model that broke the previous record on Elo; higher scores win. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.
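Record-setting entries are just a running maximum over the chronologically ordered leaderboard. A minimal sketch; the submission order and field names below are illustrative, since the table above lists only month-level dates.

```python
# Keep only entries that strictly beat every earlier Elo (assumed chronological order).
entries = [
    {"model": "o3", "elo": 1010},
    {"model": "DeepSeek R1", "elo": 1161},
    {"model": "GPT-5", "elo": 2176},
    {"model": "Claude Sonnet 4.5", "elo": 1412},  # not a record
    {"model": "Gemini 3 Pro", "elo": 2439},
]

best, sota = float("-inf"), []
for e in entries:
    if e["elo"] > best:  # strictly breaks the previous record
        best = e["elo"]
        sota.append(e)
print([e["model"] for e in sota])  # ['o3', 'DeepSeek R1', 'GPT-5', 'Gemini 3 Pro']
```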
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
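A hypothetical shape for such a reproduction script; the flags, the stubbed `run_benchmark`, and the output format are assumptions for illustration, not the benchmark's actual interface.

```python
#!/usr/bin/env python3
"""Hypothetical reproduction-script skeleton for a leaderboard submission."""
import argparse
import json

def run_benchmark(checkpoint_path: str) -> float:
    # Stub: a real script would run the checkpoint on the problem set and
    # fit its Elo as sketched above; this placeholder returns a constant.
    return 1500.0

def main() -> None:
    ap = argparse.ArgumentParser(description="Reproduce a leaderboard run")
    ap.add_argument("--checkpoint", required=True, help="path to model weights")
    ap.add_argument("--out", default="result.json", help="where to write the score")
    args = ap.parse_args()

    elo = run_benchmark(args.checkpoint)
    with open(args.out, "w") as f:
        json.dump({"checkpoint": args.checkpoint, "elo": elo}, f)

if __name__ == "__main__":
    main()
```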