GPQA
GPQA (Graduate-Level Google-Proof Q&A) consists of 448 expert-written multiple-choice questions in biology, physics, and chemistry, designed to be "Google-proof": hard to answer correctly even with unrestricted web search.
Benchmark Stats: 17 models, 17 papers, 1 metric.
Metric: accuracy (higher is better)
| Rank | Model | Code | Score (accuracy, %) | Paper / Source |
|---|---|---|---|---|
| 1 | o3 | - | 82.8 | openai-simple-evals |
| 2 | o4-mini | - | 77.6 | openai-simple-evals |
| 3 | o1 | - | 75.7 | openai-simple-evals |
| 4 | o3-mini | - | 74.9 | openai-simple-evals |
| 5 | o1-preview | - | 73.3 | openai-simple-evals |
| 6 | gpt-45-preview | - | 69.5 | openai-simple-evals |
| 7 | gpt-41 | - | 66.3 | openai-simple-evals |
| 8 | o1-mini | - | 60.0 | openai-simple-evals |
| 9 | claude-35-sonnet | - | 59.4 | openai-simple-evals |
| 10 | grok-2 | - | 56.0 | openai-simple-evals |
| 11 | llama-31-405b | - | 50.7 | openai-simple-evals |
| 12 | claude-3-opus | - | 50.4 | openai-simple-evals |
| 13 | gpt-4o | - | 49.9 | openai-simple-evals |
| 14 | gpt-4-turbo | - | 49.3 | openai-simple-evals |
| 15 | gemini-15-pro | - | 46.2 | openai-simple-evals |
| 16 | llama-31-70b | - | 41.7 | openai-simple-evals |
| 17 | gpt-4o-mini | - | 40.2 | openai-simple-evals |
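The accuracy figures above are the percentage of the 448 questions answered correctly. A minimal sketch of that computation, assuming per-question results are stored as booleans (the result format here is an assumption, not part of the benchmark definition):

```python
def accuracy(results):
    """Return accuracy as a percentage, given a list of booleans
    (True = question answered correctly)."""
    if not results:
        raise ValueError("no results to score")
    return 100.0 * sum(results) / len(results)

# Example: 371 correct out of 448 questions rounds to 82.8,
# matching the top score on this leaderboard.
print(round(accuracy([True] * 371 + [False] * 77), 1))  # → 82.8
```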