Model card
GPT-4.1
OpenAI · API
§ 01 · Benchmarks
Every benchmark for which GPT-4.1 has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | MBPP | Computer Code · Code Generation | pass@1 | 90.9% | #4 | — | source ↗ |
| 02 | HumanEval | Computer Code · Code Generation | pass@1 | 94.5% | #6 | — | source ↗ |
| 03 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 90.2% | #13 | 2025-04-14 | source ↗ |
| 04 | LiveCodeBench | Computer Code · Code Generation | pass@1 | 54.4% | #17 | 2024-03-12 | source ↗ |
| 05 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 66.3% | #22 | — | source ↗ |
| 06 | MATH | Reasoning · Mathematical Reasoning | accuracy | 82.1% | #25 | — | source ↗ |
| 07 | SWE-bench Verified | Computer Code · Code Generation | resolve-rate | 54.6% | #31 | — | source ↗ |
| 08 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 54.6% | #62 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 means current SOTA. Rows are sorted by rank, then by newest result.
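Most of the code-generation rows above report pass@1. This metric is commonly computed with the unbiased pass@k estimator introduced alongside HumanEval: generate n samples per problem, count the c that pass the tests, and estimate the probability that at least one of k drawn samples would pass. The sketch below assumes that convention (the table's sources may average differently):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, passes the benchmark's tests."""
    if n - c < k:
        # Fewer incorrect samples than draws: a correct one is guaranteed.
        return 1.0
    # 1 - P(all k drawn samples are incorrect)
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# For k = 1 this reduces to the plain fraction of passing samples, c / n.
```

A benchmark's headline pass@1 is then the mean of this value over all problems in the suite.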
§ 02 · Strengths by area
Where GPT-4.1 performs best.
§ 03 · Papers
1 paper with results for GPT-4.1.
- 2024-03-12 · Computer Code · 1 result
  LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code
§ 04 · Related models
Other OpenAI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
- openai-simple-evals · 4 results
- official-model-card · 1 result
- official-leaderboard · 1 result
- swebench-leaderboard · 1 result
- editorial · 1 result
4 of 8 rows marked verified · first result 2024-03-12, latest 2025-04-14.