Model card · o1-preview
OpenAI · API · Undisclosed params · Reasoning LLM
OpenAI's reasoning-focused model.
§ 01 · Benchmarks
Every benchmark for which o1-preview has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | AIME 2024 | Reasoning · Mathematical Reasoning | accuracy | 83.3% | #4 | — | source ↗ |
| 02 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 90.8% | #8 | 2024-09-12 | source ↗ |
| 03 | HumanEval | Computer Code · Code Generation | pass@1 | 92.4% | #11 | — | source ↗ |
| 04 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 97.8% | #12 | — | source ↗ |
| 05 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 73.3% | #15 | — | source ↗ |
| 06 | MATH | Reasoning · Mathematical Reasoning | accuracy | 85.5% | #23 | — | source ↗ |
| 07 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 36.2% | #25 | 2024-10-01 | source ↗ |
| 08 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 41.3% | #71 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
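The sort order described above (best rank first, newer results breaking ties) can be sketched in a few lines of Python. The rows below are a hypothetical subset of the table, not the card's actual data source:

```python
from datetime import date

# Hypothetical rows mirroring a few entries of the benchmark table
# (date is None when the card lists no result date).
rows = [
    {"benchmark": "MMLU",      "rank": 8,  "date": date(2024, 9, 12)},
    {"benchmark": "AIME 2024", "rank": 4,  "date": None},
    {"benchmark": "SWE-Bench", "rank": 25, "date": date(2024, 10, 1)},
    {"benchmark": "HumanEval", "rank": 11, "date": None},
]

def table_order(row):
    # Best (lowest) rank first; among equal ranks, newest result first.
    # Undated rows sort after dated ones within the same rank.
    d = row["date"] or date.min
    return (row["rank"], -d.toordinal())

rows.sort(key=table_order)
print([r["benchmark"] for r in rows])
```

With these rows the sort is decided entirely by rank, since no two entries share one; the date key only matters for tie-breaking.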
§ 02 · Strengths by area
Where o1-preview performs strongest, by task area.
§ 03 · Papers
1 paper with results for o1-preview.
- 2023-10-10 · Computer Code · 1 result
  SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
  Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other OpenAI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
- openai-simple-evals · 4 results
- openai-blog · 2 results
- sota-timeline · 1 result
- editorial · 1 result
3 of 8 rows are marked verified · first result 2024-09-12, latest 2024-10-01.