Model card
Claude 2
Anthropic · API · Undisclosed params
§ 01 · Benchmarks
Every benchmark Claude 2 has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | SWE-Bench | Computer Code · Code Generation | resolve-rate-agentic | 2.0% | #25 | 2023-10-01 | |
| 02 | SWE-Bench | Computer Code · Code Generation | resolve-rate | 2.0% | #32 | 2023-10-01 | source |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; rank #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
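The sort order described above can be sketched as follows (a minimal sketch with hypothetical row fields; the actual Codesota schema is not documented here):

```python
from datetime import date

# Hypothetical result rows: (benchmark, metric, rank, result_date).
rows = [
    ("SWE-Bench", "resolve-rate", 32, date(2023, 10, 1)),
    ("SWE-Bench", "resolve-rate-agentic", 25, date(2023, 10, 1)),
]

# Sort by rank ascending, then by newest result first
# (negate the date's ordinal so later dates sort earlier within a rank).
rows.sort(key=lambda r: (r[2], -r[3].toordinal()))

print([r[1] for r in rows])  # → ['resolve-rate-agentic', 'resolve-rate']
```

With equal ranks, the negated date ordinal would put the more recent result first, matching the "then newest result" tiebreak.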
§ 03 · Papers
1 paper with results for Claude 2.
- 2023-10-10 · Computer Code · 1 result
  SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
  Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models
Other Anthropic models scored on Codesota.
- Claude Opus 4 · Undisclosed params · 13 results · 2 SOTA
- Claude Opus 4.5 · 3 results · 2 SOTA
- Claude Sonnet 5 · Undisclosed params · 2 results · 2 SOTA
- Claude Sonnet 4 · 10 results · 1 SOTA
- Claude Mythos Preview · 1 result · 1 SOTA
- Claude 3.5 Sonnet · Undisclosed params · 27 results
- Claude Opus 4.5 · Undisclosed params · 13 results
- Claude 3.7 Sonnet · 10 results
§ 05 · Sources & freshness
Where these numbers come from.
| Source | Results |
|---|---|
| swe-agent | 1 |
| sota-timeline | 1 |
1 of 2 rows marked verified.