Codesota model card: Claude Opus 4

Claude Opus 4
Anthropic · API · Undisclosed params · 22 results · 15 benchmarks · 2 current SOTA
§ 01 · Benchmarks

Every benchmark Claude Opus 4 has a recorded score for.

 #  | Benchmark          | Area · Task                                              | Metric               | Value    | Rank    | Date       | Source
 01 | HCAST              | Agentic AI · HCAST                                       | success-rate         | 55.0%    | #1/6    | 2025-04-01 | source ↗
 02 | METR Time Horizon  | Agentic AI · Time Horizon                                | task-horizon-minutes | 60.0 min | #1/5    | 2025-04-01 | source ↗
 03 | Defects4J          | Computer Code · Program Repair                           | correct-patches      | 89.0%    | #2/5    |            | source ↗
 04 | WebArena           | Agentic AI · Web & Desktop Agents                        | success-rate         | 55.0%    | #3/6    | 2025-04-01 | source ↗
 05 | MBPP               | Computer Code · Code Generation                          | pass@1               | 92.0%    | #3/19   |            | source ↗
 06 | GPQA               | Reasoning · Multi-step Reasoning                         | accuracy             | 76.7%    | #11/33  |            | source ↗
 07 | GSM8K              | Reasoning · Mathematical Reasoning                       | accuracy             | 98.0%    | #11/32  |            | source ↗
 08 | HumanEval          | Computer Code · Code Generation                          | pass@1               | 92.2%    | #13/42  |            | source ↗
 09 | LiveCodeBench      | Computer Code · Code Generation                          | pass@1               | 57.8%    | #16/30  | 2024-03-12 | source ↗
 10 | SWE-Bench          | Computer Code · Code Generation                          | resolve-rate-agentic | 55.2%    | #17/25  | 2025-03-01 | unverified
 11 | SWE-Bench Verified | Computer Code · Code Generation                          | resolve-rate         | 72.5%    | #17/39  |            | source ↗
 12 | MMLU               | Reasoning · Commonsense Reasoning                        | accuracy             | 88.8%    | #19/41  |            | source ↗
 13 | MATH               | Reasoning · Mathematical Reasoning                       | accuracy             | 89.2%    | #20/34  |            | source ↗
 14 | SWE-Bench          | Computer Code · Code Generation                          | resolve-rate         | 55.2%    | #23/32  | 2025-03-01 | source ↗
 15 | PLCC               | Natural Language Processing · Polish Cultural Competency | grammar              | 76.0%    | #30/165 |            | source ↗
 16 | PLCC               | Natural Language Processing · Polish Cultural Competency | history              | 87.0%    | #30/165 |            | source ↗
 17 | PLCC               | Natural Language Processing · Polish Cultural Competency | art-and-entertainment| 72.0%    | #33/165 |            | source ↗
 18 | SWE-bench Verified | Agentic AI · SWE-bench                                   | resolve-rate         | 72.5%    | #33/81  |            | source ↗
 19 | PLCC               | Natural Language Processing · Polish Cultural Competency | vocabulary           | 73.0%    | #36/165 |            | source ↗
 20 | PLCC               | Natural Language Processing · Polish Cultural Competency | average              | 78.7%    | #36/165 |            | source ↗
 21 | PLCC               | Natural Language Processing · Polish Cultural Competency | culture-and-tradition| 81.0%    | #37/165 |            | source ↗
 22 | PLCC               | Natural Language Processing · Polish Cultural Competency | geography            | 83.0%    | #50/165 |            | source ↗
The Rank column shows this model's position versus all other models scored on the same benchmark and metric; the number after the slash is the competitor count. #1 indicates current SOTA. Rows are sorted by rank, then by newest result.
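The ranking rule above can be sketched in a few lines. This is an illustration only: the `rank_label` helper and the competitor names and scores are hypothetical, not Codesota's actual data or implementation.

```python
def rank_label(model: str, scores: dict[str, float]) -> str:
    """Rank = 1 + number of models with a strictly better score
    on the same benchmark + metric; the denominator is the number
    of competitors (all models other than this one)."""
    better = sum(1 for m, s in scores.items() if m != model and s > scores[model])
    competitors = len(scores) - 1
    return f"#{1 + better}/{competitors}"

# Hypothetical Defects4J field of 6 models; only the Opus 4 value (89.0%)
# comes from the table above, the rest are placeholders.
defects4j = {
    "Claude Opus 4": 89.0,
    "Model A": 91.0,
    "Model B": 85.0,
    "Model C": 80.0,
    "Model D": 78.0,
    "Model E": 75.0,
}
print(rank_label("Claude Opus 4", defects4j))  # → "#2/5"
```

With one placeholder model scoring higher, this reproduces the "#2/5" format of the Defects4J row: second place against five competitors.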
§ 02 · Strengths by area

Where Claude Opus 4 performs best, by area.

Agentic AI · 4 benchmarks · avg rank #9.5 · 2 SOTA
Computer Code · 6 benchmarks · avg rank #13.0
Reasoning · 4 benchmarks · avg rank #15.3
Natural Language Processing · 1 benchmark · avg rank #36.0
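These per-area figures appear to be plain means over the rank positions of every result in the area (Computer Code has 7 results across its 6 benchmarks, and all 7 PLCC results count toward the single NLP benchmark). A minimal sketch reproducing the averages from the rank column above, assuming that grouping; note that Reasoning gives 61/4 = 15.25, which the card presumably rounds half-up to #15.3:

```python
from collections import defaultdict
from statistics import mean

# (area, rank) pairs copied from the benchmark table above
results = [
    ("Agentic AI", 1), ("Agentic AI", 1), ("Agentic AI", 3), ("Agentic AI", 33),
    ("Computer Code", 2), ("Computer Code", 3), ("Computer Code", 13),
    ("Computer Code", 16), ("Computer Code", 17), ("Computer Code", 17),
    ("Computer Code", 23),
    ("Reasoning", 11), ("Reasoning", 11), ("Reasoning", 19), ("Reasoning", 20),
    ("Natural Language Processing", 30), ("Natural Language Processing", 30),
    ("Natural Language Processing", 33), ("Natural Language Processing", 36),
    ("Natural Language Processing", 36), ("Natural Language Processing", 37),
    ("Natural Language Processing", 50),
]

by_area = defaultdict(list)
for area, rank in results:
    by_area[area].append(rank)

for area, ranks in by_area.items():
    print(f"{area}: {len(ranks)} results, avg rank {mean(ranks)}")
```

Under this assumption the means come out to 9.5, 13.0, 15.25, and 36.0, matching the card's numbers.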
§ 03 · Papers

3 papers with results for Claude Opus 4.

  1. 2025-04-01 · Agentic AI · 3 results

     METR: Measuring Autonomy in AI Systems (2025 Update)

  2. 2024-03-12 · Computer Code · 1 result

     LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code

  3. 2023-10-10 · Computer Code · 1 result

     SWE-bench: Can Language Models Resolve Real-World GitHub Issues?

     Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models

Other Anthropic models scored on Codesota.

Claude Opus 4.5 · 3 results · 2 SOTA
Claude Sonnet 5 · Undisclosed params · 2 results · 2 SOTA
Claude Sonnet 4 · 10 results · 1 SOTA
Claude Mythos Preview · 1 result · 1 SOTA
Claude 3.5 Sonnet · Undisclosed params · 27 results
Claude Opus 4.5 · Undisclosed params · 13 results
Claude 3.7 Sonnet · 10 results
Claude 3 Opus · 5 results
§ 05 · Sources & freshness

Where these numbers come from.

sdadas/PLCC · 7 results
official-leaderboard · 3 results
official-model-card · 3 results
anthropic-model-card · 3 results
arxiv · 1 result
aider · 1 result
anthropic-blog · 1 result
anthropic-announcement · 1 result
sota-timeline · 1 result
editorial · 1 result
19 of 22 rows marked verified · first result 2024-03-12 · latest result 2025-04-01.