Codesota · Models · mistralai/Mistral-Small-3.2-24B-Instruct-2506 (API FP8) · 5 results · 1 benchmark
Model card
mistralai/Mistral-Small-3.2-24B-Instruct-2506 (API FP8)
mistralai · open-source · 24B params
§ 01 · Benchmarks
Every benchmark for which mistralai/Mistral-Small-3.2-24B-Instruct-2506 (API FP8) has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | phraseology | 4.0% | #11 | — | source ↗ |
| 02 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | language-understanding | 4.0% | #14 | — | source ↗ |
| 03 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | average | 3.8% | #19 | — | source ↗ |
| 04 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | sentiment | 4.0% | #26 | — | source ↗ |
| 05 | CPTU-Bench | Natural Language Processing · Polish Text Understanding | tricky-questions | 3.3% | #33 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 indicates the current SOTA. Results are sorted by rank, then by most recent.
§ 02 · Strengths by area
Where mistralai/Mistral-Small-3.2-24B-Instruct-2506 (API FP8) performs best, by area.
§ 04 · Related models
Other mistralai models tracked on Codesota.
Ministral-8B-Instruct-2410 · 0 results
Mistral-7B-Instruct-v0.1 · 0 results
Mistral-7B-Instruct-v0.2 · 0 results
Mistral-7B-Instruct-v0.3 · 7.25B params · 0 results
Mistral-7B-v0.3 · 0 results
Mistral-Large-Instruct-2407 · 123B params · 0 results
Mistral-Large-Instruct-2411 · 123B params · 0 results
Mistral-Nemo-Base-2407 · 0 results
§ 05 · Sources & freshness
Where these numbers come from.
SpeakLeash/CPTU-Bench · 5 results
5 of 5 rows marked verified.