Model card
GPT-4 Turbo
OpenAI · API · Undisclosed parameter count
§ 01 · Benchmarks
Every benchmark for which GPT-4 Turbo has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | HumanEval | Computer Code · Code Generation | pass@1 | 88.2% | #24 | — | source ↗ |
| 02 | HumanEval | Computer Code · Code Generation | pass@1 | 86.6% | #28 | 2023-11-01 | source ↗ |
| 03 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 49.3% | #29 | — | source ↗ |
| 04 | MATH | Reasoning · Mathematical Reasoning | accuracy | 73.4% | #29 | — | source ↗ |
| 05 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 86.7% | #30 | — | source ↗ |
| 06 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 12.5% | #31 | 2024-03-01 | source ↗ |
| 07 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 74.0% | #57 | — | source ↗ |
| 08 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 61.0% | #61 | — | source ↗ |
| 09 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 79.0% | #61 | — | source ↗ |
| 10 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 67.0% | #72 | — | source ↗ |
| 11 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 76.0% | #73 | — | source ↗ |
| 12 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 56.0% | #81 | — | source ↗ |
| 13 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 56.0% | #93 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
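The HumanEval rows above report pass@1, the probability that a single sampled completion passes the benchmark's unit tests. In practice this is usually computed with the unbiased pass@k estimator from the original HumanEval paper: generate `n` samples per problem, count the `c` that pass, and estimate the chance that at least one of `k` draws succeeds. A minimal sketch (function name and the `n=10, c=8` figures are illustrative, not taken from this card):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations (c correct), passes."""
    if n - c < k:
        # Fewer incorrect samples than draws: a correct one is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# For k=1 this reduces to the plain fraction of correct samples:
print(pass_at_k(n=10, c=8, k=1))  # 0.8
```

For k=1 the combinatorial term simplifies to `(n - c) / n`, so pass@1 is just the per-sample pass rate averaged over problems.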
§ 02 · Strengths by area
Where GPT-4 Turbo performs best.
§ 03 · Papers
1 paper with results for GPT-4 Turbo.
- 2023-10-10 · Computer Code · 1 result
SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other OpenAI models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
- sdadas/PLCC · 7 results
- openai-simple-evals · 4 results
- shadow-page-humaneval · 1 result
- sota-timeline · 1 result
9 of 13 rows marked verified · first result 2023-11-01, latest 2024-03-01.