Codesota · Models · Gemma-2-9b

Model card

Gemma-2-9b

Google · open-source · 7 results · 1 benchmark
§ 01 · Benchmarks

Every benchmark Gemma-2-9b has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 38.0% | #147/165 | | source ↗ |
| 02 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 30.0% | #148/165 | | source ↗ |
| 03 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 23.0% | #151/165 | | source ↗ |
| 04 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 29.2% | #152/165 | | source ↗ |
| 05 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 19.0% | #152/165 | | source ↗ |
| 06 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 30.0% | #153/165 | | source ↗ |
| 07 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 35.0% | #156/165 | | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric (the total number of competitors appears after the slash). #1, shown in red, marks the current SOTA. Rows are sorted by rank, then by newest result.
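The `average` row can be reproduced from the six per-task PLCC scores above. A minimal sketch (the task labels come from the table; rounding to one decimal place is an assumption about how the site displays values):

```python
# Per-task PLCC scores for Gemma-2-9b, taken from the table above (in %).
scores = {
    "grammar": 38.0,
    "vocabulary": 30.0,
    "culture-and-tradition": 23.0,
    "art-and-entertainment": 19.0,
    "geography": 30.0,
    "history": 35.0,
}

# The "average" metric appears to be the plain (unweighted) mean
# of the six task scores, displayed to one decimal place.
average = round(sum(scores.values()) / len(scores), 1)
print(average)  # → 29.2
```

Note that the unweighted mean matches the table's 29.2% exactly, which suggests no per-task weighting is applied.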
§ 02 · Strengths by area

Where Gemma-2-9b actually performs.

Natural Language Processing · 1 benchmark · avg rank #151.3
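The avg-rank figure can likewise be checked against the seven per-row ranks listed in § 01. A hedged sketch, assuming avg rank is the plain mean of the rank positions:

```python
# Rank positions for the seven PLCC rows in § 01 (out of 165 models each).
ranks = [147, 148, 151, 152, 152, 153, 156]

# Average rank across all scored rows, displayed to one decimal place.
avg_rank = round(sum(ranks) / len(ranks), 1)
print(avg_rank)  # → 151.3
```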
§ 04 · Related models

Other Google models scored on Codesota.

Gemini 2.5 Pro · 16 results · 3 SOTA
Gemini 3 Pro · undisclosed params · 13 results · 2 SOTA
Gemini 1.5 Pro · 12 results · 1 SOTA
Gemini 3.1 Pro · 3 results · 1 SOTA
ViT-H/14 · 632M params · 2 results · 1 SOTA
CoCa (finetuned) · 2.1B params · 1 result · 1 SOTA
Gemini 2.0 Flash · 1 result · 1 SOTA
Gemini 3.1 Pro Preview · 1 result · 1 SOTA
§ 05 · Sources & freshness

Where these numbers come from.

sdadas/PLCC · 7 results · 7 of 7 rows marked verified.