Codesota · Models · o1-mini
OpenAI · 4 results · 4 benchmarks
Model card

o1-mini

OpenAI · api
§ 01 · Benchmarks

Every benchmark o1-mini has a recorded score for.

#  | Benchmark | Area · Task                       | Metric   | Value | Rank   | Date | Source
01 | HumanEval | Computer Code · Code Generation   | pass@1   | 92.4% | #11/42 |      | source ↗
02 | MATH      | Reasoning · Mathematical Reasoning| accuracy | 90.0% | #18/34 |      | source ↗
03 | GPQA      | Reasoning · Multi-step Reasoning  | accuracy | 60.0% | #23/33 |      | source ↗
04 | MMLU      | Reasoning · Commonsense Reasoning | accuracy | 85.2% | #37/41 |      | source ↗
The rank column shows this model’s position among all models scored on the same benchmark and metric (the total number of competitors appears after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
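The rank convention above can be sketched in a few lines of Python. The leaderboard data here is illustrative, not Codesota's, and the tie-handling (strictly higher score wins) is an assumption:

```python
# Hypothetical per-benchmark scores; model names and values are
# illustrative placeholders, not actual Codesota data.
def rank(scores: dict[str, float], model: str) -> int:
    """Rank = 1 + number of models with a strictly higher score."""
    return 1 + sum(1 for s in scores.values() if s > scores[model])

leaderboard = {"model-a": 95.0, "o1-mini": 92.4, "model-c": 90.0}
print(f"#{rank(leaderboard, 'o1-mini')}/{len(leaderboard)}")  # → #2/3
```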
§ 02 · Strengths by area

Where o1-mini actually performs.

Computer Code — 1 benchmark · avg rank #11.0
Reasoning — 3 benchmarks · avg rank #26.0
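The per-area figures follow directly from the benchmark table: each area's "avg rank" is the mean of that area's ranks. The ranks below are taken from the table in § 01; the averaging itself is an assumed (but straightforward) reading of how the figures are derived:

```python
from collections import defaultdict

# (benchmark, area, rank) rows from the § 01 table.
results = [
    ("HumanEval", "Computer Code", 11),
    ("MATH", "Reasoning", 18),
    ("GPQA", "Reasoning", 23),
    ("MMLU", "Reasoning", 37),
]

ranks_by_area = defaultdict(list)
for _bench, area, rank in results:
    ranks_by_area[area].append(rank)

avg_rank = {area: sum(r) / len(r) for area, r in ranks_by_area.items()}
print(avg_rank)  # → {'Computer Code': 11.0, 'Reasoning': 26.0}
```

For Reasoning, that is (18 + 23 + 37) / 3 = 26.0, matching the figure shown above.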
§ 04 · Related models

Other OpenAI models scored on Codesota.

GPT-4o
Undisclosed params · 35 results · 9 SOTA
o3
16 results · 5 SOTA
o4-mini
13 results · 3 SOTA
o3 (high)
2 results · 1 SOTA
o4-mini (high)
1 result · 1 SOTA
o1
11 results
GPT-5
8 results
o1-preview
Undisclosed params · 8 results
§ 05 · Sources & freshness

Where these numbers come from.

openai-simple-evals — 4 results
0 of 4 rows marked verified.