Model card
Gemma-2-27b
Google · open-source
§ 01 · Benchmarks
Every benchmark Gemma-2-27b has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 41.0% | #123 | — | source ↗ |
| 02 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 37.0% | #125 | — | source ↗ |
| 03 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 46.0% | #126 | — | source ↗ |
| 04 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 42.7% | #132 | — | source ↗ |
| 05 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 32.0% | #132 | — | source ↗ |
| 06 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 47.0% | #134 | — | source ↗ |
| 07 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 53.0% | #134 | — | source ↗ |
The Rank column shows this model’s position among all other models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
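The table's "average" row is consistent with an unweighted mean of the six category scores. A minimal sketch checking that arithmetic (the score values are taken from the table above; the dict layout is illustrative, not the leaderboard's data format):

```python
# PLCC category scores for Gemma-2-27b, copied from the table above
scores = {
    "culture-and-tradition": 41.0,
    "vocabulary": 37.0,
    "grammar": 46.0,
    "art-and-entertainment": 32.0,
    "geography": 47.0,
    "history": 53.0,
}

# Unweighted mean across the six categories, rounded to one decimal
average = round(sum(scores.values()) / len(scores), 1)
print(average)  # 42.7, matching the table's "average" row
```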
§ 02 · Strengths by area
Where Gemma-2-27b performs best.
§ 04 · Related models
Other Google models scored on Codesota.
- Gemini 2.5 Pro · 16 results · 3 SOTA
- Gemini 3 Pro · undisclosed params · 13 results · 2 SOTA
- Gemini 1.5 Pro · 12 results · 1 SOTA
- Gemini 3.1 Pro · 3 results · 1 SOTA
- ViT-H/14 · 632M params · 2 results · 1 SOTA
- CoCa (finetuned) · 2.1B params · 1 result · 1 SOTA
- Gemini 2.0 Flash · 1 result · 1 SOTA
- Gemini 3.1 Pro Preview · 1 result · 1 SOTA
§ 05 · Sources & freshness
Where these numbers come from.
- sdadas/PLCC · 7 results · 7 of 7 rows marked verified.