Model card
Claude 3.5 Sonnet
Anthropic · API · Undisclosed params · Multimodal LLM · Proprietary
Anthropic Claude 3.5 Sonnet, released June 2024.
§ 01 · Benchmarks
Every benchmark Claude 3.5 Sonnet has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | BIG-Bench Hard | Reasoning · Multi-step Reasoning | accuracy | 93.1% | #1 | — | source ↗ |
| 02 | BixBench | Agentic AI · Bioinformatics Agents | accuracy | 17.0% | #1 | — | source ↗ |
| 03 | CommonsenseQA | Reasoning · Commonsense Reasoning | accuracy | 83.2% | #2 | — | source ↗ |
| 04 | HotpotQA | Reasoning · Multi-step Reasoning | f1 | 68.5% | #2 | — | source ↗ |
| 05 | LogiQA | Reasoning · Logical Reasoning | accuracy | 53.8% | #2 | — | source ↗ |
| 06 | MAWPS | Reasoning · Arithmetic Reasoning | accuracy | 95.8% | #2 | — | source ↗ |
| 07 | ReClor | Reasoning · Logical Reasoning | accuracy | 68.9% | #2 | — | source ↗ |
| 08 | SVAMP | Reasoning · Arithmetic Reasoning | accuracy | 91.2% | #2 | — | source ↗ |
| 09 | StrategyQA | Reasoning · Multi-step Reasoning | accuracy | 79.8% | #2 | — | source ↗ |
| 10 | WinoGrande | Reasoning · Commonsense Reasoning | accuracy | 85.4% | #2 | — | source ↗ |
| 11 | CC-OCR | Computer Vision · General OCR Capabilities | kie-f1 | 64.6% | #3 | — | source ↗ |
| 12 | HellaSwag | Reasoning · Commonsense Reasoning | accuracy | 89.0% | #3 | — | source ↗ |
| 13 | RE-Bench | Agentic AI · RE-Bench | normalized-score | 0.1% | #4 | 2024-11-22 | source ↗ |
| 14 | SNLI | Natural Language Processing · Natural Language Inference | accuracy | 91.8% | #4 | 2024-06-20 | source ↗ |
| 15 | SQuAD v2.0 | Natural Language Processing · Question Answering | f1 | 90.2% | #4 | 2024-06-20 | source ↗ |
| 16 | CC-OCR | Computer Vision · General OCR Capabilities | document-parsing | 47.8% | #4 | — | source ↗ |
| 17 | CC-OCR | Computer Vision · General OCR Capabilities | multilingual-f1 | 65.7% | #4 | — | source ↗ |
| 18 | HCAST | Agentic AI · HCAST | success-rate | 18.0% | #5 | 2025-04-01 | source ↗ |
| 19 | CC-OCR | Computer Vision · General OCR Capabilities | multi-scene-f1 | 72.9% | #5 | — | source ↗ |
| 20 | ARC-Challenge | Reasoning · Commonsense Reasoning | accuracy | 96.7% | #7 | — | source ↗ |
| 21 | MBPP | Computer Code · Code Generation | pass@1 | 89.2% | #10 | — | source ↗ |
| 22 | MMMU | Multimodal · Visual Question Answering | accuracy | 68.3% | #12 | 2024-10-22 | source ↗ |
| 23 | HumanEval | Computer Code · Code Generation | pass@1 | 92.0% | #14 | — | source ↗ |
| 24 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 96.4% | #17 | — | source ↗ |
| 25 | SWE-bench | Computer Code · Code Generation | resolve-rate-agentic | 49.0% | #18 | 2024-12-01 | |
| 26 | GSM8K | Reasoning · Mathematical Reasoning | accuracy | 95.0% | #20 | 2024-07-01 | source ↗ |
| 27 | MMLU | Reasoning · Commonsense Reasoning | accuracy | 88.3% | #23 | — | source ↗ |
| 28 | GPQA | Reasoning · Multi-step Reasoning | accuracy | 59.4% | #24 | — | source ↗ |
| 29 | SWE-bench | Computer Code · Code Generation | resolve-rate | 27.0% | #27 | 2024-08-01 | source ↗ |
| 30 | MATH | Reasoning · Mathematical Reasoning | accuracy | 71.1% | #30 | — | source ↗ |
| 31 | SWE-bench Verified | Computer Code · Code Generation | resolve-rate | 50.8% | #32 | — | source ↗ |
| 32 | SWE-bench Verified | Agentic AI · SWE-bench | resolve-rate | 49.0% | #67 | — | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric; #1 indicates the current SOTA. Rows are sorted by rank, then by newest result.
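The per-benchmark ranking rule described above can be sketched as follows. This is a minimal illustration only: models are grouped by (benchmark, metric) and ranked by score, descending. Claude 3.5 Sonnet's HumanEval score is taken from the table; the competitor names and scores are hypothetical.

```python
# Sketch of the rank rule: within each (benchmark, metric) group,
# sort scores descending; position 1 is the current SOTA.
from collections import defaultdict

results = [
    # (model, benchmark, metric, score)
    ("Claude 3.5 Sonnet", "HumanEval", "pass@1", 92.0),    # from the table above
    ("Hypothetical Model A", "HumanEval", "pass@1", 96.3),  # invented competitor
    ("Hypothetical Model B", "HumanEval", "pass@1", 88.1),  # invented competitor
]

def rank_table(results):
    """Return {(model, benchmark, metric): rank} per the rule above."""
    groups = defaultdict(list)
    for model, bench, metric, score in results:
        groups[(bench, metric)].append((model, score))
    ranks = {}
    for (bench, metric), entries in groups.items():
        entries.sort(key=lambda e: e[1], reverse=True)  # higher score = better rank
        for pos, (model, _) in enumerate(entries, start=1):
            ranks[(model, bench, metric)] = pos
    return ranks

ranks = rank_table(results)
print(ranks[("Claude 3.5 Sonnet", "HumanEval", "pass@1")])  # → 2
```

Note this treats every metric as higher-is-better, which holds for all metrics in the table above (accuracy, f1, pass@1, resolve-rate, success-rate).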
§ 02 · Strengths by area
Where Claude 3.5 Sonnet performs strongest, by task area.
§ 03 · Papers
6 papers with results for Claude 3.5 Sonnet.
- 2025-04-01 · Agentic AI · 1 result
  METR: Measuring Autonomy in AI Systems (2025 Update)
- 2025-02-28 · Agentic AI · 1 result
  BixBench: a Comprehensive Benchmark for LLM-based Agents in Computational Biology
  Ludovico Mitchener, Jon M Laurent, Alex Andonian, Benjamin Tenmann et al.
- 2024-11-22 · Agentic AI · 1 result
  RE-Bench: Evaluating Frontier AI R&D Capabilities of Language Model Agents Against Human Experts
- 2024-10-22 · Multimodal · 1 result
  Claude 3.5 Sonnet Model Card
- 2024-06-20 · Natural Language Processing · 2 results
  Claude 3.5 Sonnet Model Card
- 2023-10-10 · Computer Code · 1 result
  SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
  Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other Anthropic models scored on Codesota.
Claude Opus 4
Undisclosed params · 13 results · 2 SOTA
Claude Opus 4.5
3 results · 2 SOTA
Claude Sonnet 5
Undisclosed params · 2 results · 2 SOTA
Claude Sonnet 4
10 results · 1 SOTA
Claude Mythos Preview
1 result · 1 SOTA
Claude Opus 4.5
Undisclosed params · 13 results
Claude 3.7 Sonnet
10 results
Claude 3 Opus
5 results
§ 05 · Sources & freshness
Where these numbers come from.
- anthropic-blog · 7 results
- arxiv-paper · 6 results
- arxiv · 4 results
- openai-simple-evals · 4 results
- alphaxiv-leaderboard · 2 results
- cc-ocr-paper · 2 results
- llm-stats-bbh · 1 result
- research-paper · 1 result
- official-leaderboard · 1 result
- anthropic-internal · 1 result
- gsm8k-shadow-page · 1 result
- sota-timeline · 1 result
- editorial · 1 result
10 of 32 rows are marked verified. First result 2024-06-20; latest 2025-04-01.