Codesota · Models · DeepSeek-V3.2-Speciale
DeepSeek · 7 results · 1 benchmark
Model card

DeepSeek-V3.2-Speciale

DeepSeek · open-source
§ 01 · Benchmarks

All benchmarks for which DeepSeek-V3.2-Speciale has a recorded score.

| #  | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|----|-----------|-------------|--------|-------|------|------|--------|
| 01 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 84.0% | #13/165 | | source ↗ |
| 02 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 94.0% | #14/165 | | source ↗ |
| 03 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 90.0% | #16/165 | | source ↗ |
| 04 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 81.0% | #29/165 | | source ↗ |
| 05 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 71.0% | #38/165 | | source ↗ |
| 06 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 71.0% | #42/165 | | source ↗ |
| 07 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 76.0% | #46/165 | | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric (the total number of competitors appears after the slash). A rank of #1 indicates the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where DeepSeek-V3.2-Speciale actually performs.

Natural Language Processing · 1 benchmark · avg rank #28.3
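The average-rank figure above can be reproduced from the per-metric ranks in the benchmarks table; a minimal sketch (the rank values are taken from the seven PLCC rows):

```python
# Per-metric ranks for DeepSeek-V3.2-Speciale on PLCC, from the table above:
# grammar #13, geography #14, history #16, average #29,
# art-and-entertainment #38, vocabulary #42, culture-and-tradition #46.
plcc_ranks = [13, 14, 16, 29, 38, 42, 46]

# Area-level "avg rank" is the arithmetic mean of the per-metric ranks,
# rounded to one decimal place for display.
avg_rank = sum(plcc_ranks) / len(plcc_ranks)
print(f"avg rank #{avg_rank:.1f}")  # → avg rank #28.3
```

This agrees with the #28.3 shown for the Natural Language Processing area.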
§ 04 · Related models

Other DeepSeek models scored on Codesota.

DeepSeek R1 · 671B MoE params · 10 results
DeepSeek-V3 · 7 results
DeepSeek-Coder-V2-Instruct · Unknown params · 4 results
DeepSeek-OCR · 3 results
DeepSeek-R1-0528 · 3 results
DeepSeek V3.5 · 685B MoE params · 2 results
DeepSeek-V2.5 · 2 results
DeepSeek-V3.1 · 2 results
§ 05 · Sources & freshness

Where these numbers come from.

sdadas/PLCC · 7 results · 7 of 7 rows marked verified.