Model card
Mistral-Small-24B-Instruct-2501
Mistral · open-source
§ 01 · Benchmarks
All benchmarks for which Mistral-Small-24B-Instruct-2501 has a recorded score.
| # | Benchmark | Area · Task | Metric | Score (0–10) | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | extraction | 9.9 | #1 | — | source ↗ |
| 02 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | math | 7.8 | #4 | — | source ↗ |
| 03 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | coding | 8.0 | #4 | — | source ↗ |
| 04 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | pl-score | 8.7 | #7 | — | source ↗ |
| 05 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | reasoning | 7.9 | #9 | — | source ↗ |
| 06 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | roleplay | 9.1 | #10 | — | source ↗ |
| 07 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | humanities | 9.7 | #11 | — | source ↗ |
| 08 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | stem | 9.5 | #14 | — | source ↗ |
| 09 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | writing | 8.0 | #24 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
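The ranking convention above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `compute_ranks` helper and its data model are assumptions, not Codesota's actual implementation): group results by (benchmark, metric), sort scores descending, and take the 1-based position.

```python
from collections import defaultdict

def compute_ranks(results):
    """Hypothetical sketch of the Rank column.

    results: list of dicts with 'model', 'benchmark', 'metric', 'score'.
    Returns {(model, benchmark, metric): rank}, where rank 1 is the
    best score among all models on the same benchmark + metric.
    """
    groups = defaultdict(list)
    for r in results:
        groups[(r["benchmark"], r["metric"])].append(r)

    ranks = {}
    for (benchmark, metric), rows in groups.items():
        # Higher score is better; ties are broken by list order here
        # (a simplification -- real leaderboards may share tied ranks).
        rows.sort(key=lambda r: r["score"], reverse=True)
        for position, r in enumerate(rows, start=1):
            ranks[(r["model"], benchmark, metric)] = position
    return ranks

# Toy example with two models on one metric:
results = [
    {"model": "A", "benchmark": "MT-Bench-PL", "metric": "extraction", "score": 9.9},
    {"model": "B", "benchmark": "MT-Bench-PL", "metric": "extraction", "score": 9.1},
]
print(compute_ranks(results)[("A", "MT-Bench-PL", "extraction")])  # → 1
```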
§ 02 · Strengths by area
Areas where Mistral-Small-24B-Instruct-2501 performs best.
§ 04 · Related models
Other Mistral models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
SpeakLeash/MT-Bench-PL — 9 results; 9 of 9 rows marked verified.