Codesota · Models · Claude Opus 4.1 · Anthropic · 2 results · 2 benchmarks
Model card

Claude Opus 4.1

Anthropic
§ 01 · Benchmarks

Every benchmark Claude Opus 4.1 has a recorded score for.

# | Benchmark | Area · Task | Metric | Value | Rank | Date | Source
01 | MMLU-Pro | Reasoning · Commonsense Reasoning | accuracy | 88.0% | #6/20 | 2026-04-20 | source ↗
02 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 74.5% | #22/81 | | source ↗
Rank column shows this model’s position vs all other models scored on the same benchmark + metric (competitors after the slash). #1 in red means current SOTA. Sorted by rank, then newest result.
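The ordering described above (rank ascending, ties broken by newest result first) can be sketched with Python's stable sort; the row dicts and field names below are illustrative assumptions, not Codesota's actual data model:

```python
# Hypothetical rows mirroring the benchmark table; field names are assumptions.
rows = [
    {"benchmark": "SWE-bench Verified", "rank": 22, "date": ""},
    {"benchmark": "MMLU-Pro", "rank": 6, "date": "2026-04-20"},
]

# Two stable sorts: newest date first, then rank ascending.
# Because list.sort is stable, equal ranks keep the newest-first order.
rows.sort(key=lambda r: r["date"], reverse=True)
rows.sort(key=lambda r: r["rank"])
```

ISO-8601 date strings sort lexicographically, so no date parsing is needed for the tiebreak.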
§ 02 · Strengths by area

Where Claude Opus 4.1 actually performs.

Reasoning · 1 benchmark · avg rank #6.0
Agentic AI · 1 benchmark · avg rank #22.0
§ 04 · Related models

Other Anthropic models scored on Codesota.

Claude Opus 4 · Undisclosed params · 13 results · 2 SOTA
Claude Opus 4.5 · 3 results · 2 SOTA
Claude Sonnet 5 · Undisclosed params · 2 results · 2 SOTA
Claude Sonnet 4 · 10 results · 1 SOTA
Claude Mythos Preview · 1 result · 1 SOTA
Claude 3.5 Sonnet · Undisclosed params · 27 results
Claude Opus 4.5 · Undisclosed params · 13 results
Claude 3.7 Sonnet · 10 results
§ 05 · Sources & freshness

Where these numbers come from.

pricepertoken · 1 result
editorial · 1 result

1 of 2 rows marked verified.