Codesota · Models · o3-mini · OpenAI · 8 results · 8 benchmarks
Model card

o3-mini.

OpenAI · API
§ 01 · Benchmarks

Every benchmark o3-mini has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | HumanEval | Computer Code · Code Generation | pass@1 | 96.3% | #2/42 | | source ↗ |
| 02 | MBPP | Computer Code · Code Generation | pass@1 | 93.3% | #2/19 | | source ↗ |
| 03 | MATH | Reasoning · Mathematical Reasoning | accuracy | 97.9% | #3/34 | | source ↗ |
| 04 | LiveCodeBench | Computer Code · Code Generation | pass@1 | 66.9% | #9/30 | 2024-03-12 | source ↗ |
| 05 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 74.9% | #13/33 | | source ↗ |
| 06 | SWE-Bench Verified | Computer Code · Code Generation | resolve-rate | 55.8% | #30/39 | | source ↗ |
| 07 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 85.9% | #35/41 | | source ↗ |
| 08 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 49.3% | #66/81 | | source ↗ |
The Rank column shows this model's position against all other models scored on the same benchmark and metric (the number of competitors appears after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where o3-mini actually performs.

Computer Code · 4 benchmarks · avg rank #10.8
Reasoning · 3 benchmarks · avg rank #17.0
Agentic AI · 1 benchmark · avg rank #66.0
§ 03 · Papers

1 paper with results for o3-mini.

  1. 2024-03-12 · Computer Code · 1 result · LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

§ 04 · Related models

Other OpenAI models scored on Codesota.

GPT-4o · Undisclosed params · 35 results · 9 SOTA
o3 · 16 results · 5 SOTA
o4-mini · 13 results · 3 SOTA
o3 (high) · 2 results · 1 SOTA
o4-mini (high) · 1 result · 1 SOTA
o1 · 11 results
GPT-5 · 8 results
o1-preview · Undisclosed params · 8 results
§ 05 · Sources & freshness

Where these numbers come from.

openai-simple-evals · 4 results
official-model-card · 1 result
official-leaderboard · 1 result
swebench-leaderboard · 1 result
editorial · 1 result
3 of 8 rows marked verified.