Model card
Qwen2.5-7B
Qwen · open-source
§ 01 · Benchmarks
Every benchmark on which Qwen2.5-7B has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | belebele | 85.1% | #97 | — | source ↗ |
| 02 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | average | 53.4% | #114 | — | source ↗ |
| 03 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | dyk | 57.5% | #124 | — | source ↗ |
| 04 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | eq-bench | 26.5% | #160 | — | source ↗ |
| 05 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | ppc | 73.0% | #163 | — | source ↗ |
| 06 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | cbd | 25.6% | #235 | — | source ↗ |
| 07 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polemo2-in | 70.5% | #294 | — | source ↗ |
| 08 | Open PL LLM Leaderboard | Natural Language Processing · Polish LLM General | polqa-open-book | 80.4% | #298 | — | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 marks the current state of the art. Rows are sorted by rank, then by newest result.
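The ordering described above (rank ascending, then newest result first) can be sketched in a few lines. This is a minimal illustration with hypothetical row data, not the live leaderboard feed; the field names are assumptions for the example.

```python
# Hypothetical benchmark rows (metric, score, rank, result date).
# Dates are None here; in real data they would be datetime.date values.
rows = [
    {"metric": "average",  "value": 53.4, "rank": 114, "date": None},
    {"metric": "belebele", "value": 85.1, "rank": 97,  "date": None},
    {"metric": "dyk",      "value": 57.5, "rank": 124, "date": None},
]

# Sort by rank ascending; within the same rank, newer dates first.
# Rows without a date sort after dated rows of the same rank.
ordered = sorted(
    rows,
    key=lambda r: (r["rank"], -(r["date"].toordinal() if r["date"] else 0)),
)

print([r["metric"] for r in ordered])  # rank 97, 114, 124
```

Python's `sorted` is stable, so rows that tie on both keys keep their original relative order.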
§ 02 · Strengths by area
Where Qwen2.5-7B performs well, by task area.
§ 04 · Related models
Other Qwen models scored on Codesota.
- QwQ-32B · 0 results
- QwQ-32B-Preview · 0 results
- Qwen/Qwen2.5-0.5B-Instruct · 0.49B params · 0 results
- Qwen/Qwen3-14B non-thinking (API) · 14B params · 0 results
- Qwen/Qwen3-235B-A22B non-thinking (API) · 235B params · 0 results
- Qwen/Qwen3-30B-A3B non-thinking (API) · 30B params · 0 results
- Qwen/Qwen3-32B non-thinking (API) · 32B params · 0 results
- Qwen/Qwen3-8B non-thinking (API) · 8B params · 0 results
§ 05 · Sources & freshness
Where these numbers come from.
speakleash/open_pl_llm_leaderboard · 8 results
8 of 8 rows marked verified.