Codesota · Models · DeepSeek V3.5 · DeepSeek · 2 results · 1 benchmark
Model card

DeepSeek V3.5

DeepSeek · open-source · 685B MoE params
§ 01 · Benchmarks

Every benchmark DeepSeek V3.5 has a recorded score for.

#  | Benchmark | Area · Task                     | Metric               | Value | Rank   | Date       | Source
01 | SWE-Bench | Computer Code · Code Generation | resolve-rate-agentic | 74.6% | #14/25 | 2025-11-01 | unverified
02 | SWE-Bench | Computer Code · Code Generation | resolve-rate         | 74.6% | #16/32 | 2025-11-01 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark and metric (the number after the slash is the total number of competitors). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
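The rank figure described above can be sketched in a few lines: a model's rank on one benchmark + metric is one plus the number of models with a strictly higher score. The function name and the leaderboard data below are hypothetical placeholders, not Codesota's actual implementation.

```python
def rank_on_metric(scores: dict[str, float], model: str) -> str:
    """Rank `model` among all models by descending score.

    Returns a "#rank/total" string like the Rank column; models with a
    strictly higher score push the rank down, ties share the better rank.
    """
    score = scores[model]
    rank = 1 + sum(1 for s in scores.values() if s > score)
    return f"#{rank}/{len(scores)}"

# Hypothetical leaderboard for a single benchmark + metric.
leaderboard = {"model-a": 80.2, "model-b": 74.6, "model-c": 70.1}
print(rank_on_metric(leaderboard, "model-b"))  # → #2/3
```

With 32 models scored on the same metric, a score beaten by 15 of them would render as #16/32, matching the second table row.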
§ 02 · Strengths by area

How DeepSeek V3.5 performs in each benchmark area.

Computer Code · 1 benchmark · avg rank #15.0
§ 03 · Papers

1 paper with results for DeepSeek V3.5.

  1. 2023-10-10 · Computer Code · 1 result

    SWE-bench: Can Language Models Resolve Real-World GitHub Issues?

    Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao et al.
§ 04 · Related models

Other DeepSeek models scored on Codesota.

DeepSeek R1
671B MoE params · 10 results
DeepSeek-V3
7 results
DeepSeek-Coder-V2-Instruct
Unknown params · 4 results
DeepSeek-OCR
3 results
DeepSeek-R1-0528
3 results
DeepSeek-V2.5
2 results
DeepSeek-V3.1
2 results
DeepSeek V3.2
1 result
§ 05 · Sources & freshness

Where these numbers come from.

deepseek-agent · 1 result
swebench-leaderboard · 1 result
1 of 2 rows marked verified.