Codesota · Models · Llama-PLLuM-70B-instruct (CYFRAGOVPL) · 6 results · 2 benchmarks
Model card

Llama-PLLuM-70B-instruct

CYFRAGOVPL · open-source · 70.6B params
§ 01 · Benchmarks

Every benchmark Llama-PLLuM-70B-instruct has a recorded score for.

| #  | Benchmark       | Area · Task                                                  | Metric                 | Value | Rank    | Date | Source   |
|----|-----------------|--------------------------------------------------------------|------------------------|-------|---------|------|----------|
| 01 | Polish EQ-Bench | Natural Language Processing · Polish Emotional Intelligence  | eq-score               | 70.0% | #20/101 |      | source ↗ |
| 02 | CPTU-Bench      | Natural Language Processing · Polish Text Understanding      | language-understanding | 3.6%  | #40/93  |      | source ↗ |
| 03 | CPTU-Bench      | Natural Language Processing · Polish Text Understanding      | sentiment              | 3.8%  | #43/93  |      | source ↗ |
| 04 | CPTU-Bench      | Natural Language Processing · Polish Text Understanding      | average                | 3.3%  | #45/93  |      | source ↗ |
| 05 | CPTU-Bench      | Natural Language Processing · Polish Text Understanding      | phraseology            | 3.3%  | #47/93  |      | source ↗ |
| 06 | CPTU-Bench      | Natural Language Processing · Polish Text Understanding      | tricky-questions       | 2.6%  | #50/93  |      | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
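As a minimal sketch of how such a rank could be derived: a model's rank on a metric is its 1-based position when all scores are sorted best-first. The scores below are hypothetical placeholders, not real leaderboard data.

```python
def rank_of(score: float, all_scores: list[float]) -> int:
    """1-based position of `score` when scores are sorted descending
    (higher is better, as with eq-score percentages)."""
    return 1 + sum(1 for s in all_scores if s > score)

# Hypothetical eq-scores for a 5-model leaderboard.
scores = [82.1, 75.4, 70.0, 66.3, 59.8]
print(f"#{rank_of(70.0, scores)}/{len(scores)}")  # → #3/5
```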
§ 02 · Strengths by area

Where Llama-PLLuM-70B-instruct actually performs.

Natural Language Processing · 2 benchmarks · avg rank #40.8
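The average rank above can be reproduced directly from the six ranks in the benchmark table (#20, #40, #43, #45, #47, #50):

```python
# Mean of the model's six per-metric ranks from the benchmark table.
ranks = [20, 40, 43, 45, 47, 50]
avg_rank = sum(ranks) / len(ranks)
print(f"avg rank #{avg_rank:.1f}")  # → avg rank #40.8
```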
§ 04 · Related models

Other CYFRAGOVPL models scored on Codesota.

CYFRAGOVPL/Llama-PLLuM-8B-instruct · 8.03B params · 0 results
CYFRAGOVPL/PLLuM-12B-nc-chat · 12.2B params · 0 results
CYFRAGOVPL/PLLuM-12B-nc-instruct · 12.2B params · 0 results
CYFRAGOVPL/pllum-12b-nc-instruct-250715 · 12.2B params · 0 results
Llama-PLLuM-70B-chat · 70.6B params · 0 results
Llama-PLLuM-8B-chat · 8.03B params · 0 results
PLLuM-12B-chat · 12.2B params · 0 results
PLLuM-12B-instruct · 12.2B params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

SpeakLeash/CPTU-Bench · 5 results
SpeakLeash/Polish-EQ-Bench · 1 result
6 of 6 rows marked verified.