Codesota · Models · Llama-PLLuM-8B-chat (PLLuM) · 16 results · 2 benchmarks
Model card

Llama-PLLuM-8B-chat

PLLuM · open-source
§ 01 · Benchmarks

Every benchmark Llama-PLLuM-8B-chat has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | humanities | 9.5% | #15/50 | | source ↗ |
| 02 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | reasoning | 5.3% | #26/50 | | source ↗ |
| 03 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | pl-score | 6.0% | #32/50 | | source ↗ |
| 04 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | writing | 7.2% | #34/50 | | source ↗ |
| 05 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | stem | 7.5% | #34/50 | | source ↗ |
| 06 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | extraction | 6.3% | #38/50 | | source ↗ |
| 07 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | roleplay | 6.2% | #38/50 | | source ↗ |
| 08 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | coding | 3.6% | #41/50 | | source ↗ |
| 09 | Polish MT-Bench | Natural Language Processing · Polish Conversation Quality | math | 2.8% | #42/50 | | source ↗ |
| 10 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 33.0% | #129/165 | | source ↗ |
| 11 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 35.0% | #135/165 | | source ↗ |
| 12 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 46.0% | #135/165 | | source ↗ |
| 13 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 34.0% | #138/165 | | source ↗ |
| 14 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 50.0% | #139/165 | | source ↗ |
| 15 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 38.5% | #142/165 | | source ↗ |
| 16 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 33.0% | #154/165 | | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric (the number after the slash is the field size). #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
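The PLCC "average" row above is consistent with the arithmetic mean of the six sub-area scores. A quick check, using the values from the table:

```python
# PLCC sub-area scores for Llama-PLLuM-8B-chat, as listed in the table above (in %).
plcc = {
    "art-and-entertainment": 33.0,
    "vocabulary": 35.0,
    "geography": 46.0,
    "culture-and-tradition": 34.0,
    "history": 50.0,
    "grammar": 33.0,
}

# Arithmetic mean over the six sub-areas.
average = sum(plcc.values()) / len(plcc)
print(f"{average:.1f}%")  # 38.5%, matching the reported "average" metric
```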
§ 02 · Strengths by area

Where Llama-PLLuM-8B-chat actually performs.

Natural Language Processing · 2 benchmarks · avg rank #79.5
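The "avg rank #79.5" figure matches the mean of the 16 per-metric ranks listed in § 01 (the position before each slash):

```python
# Per-metric ranks for Llama-PLLuM-8B-chat from the § 01 table.
ranks = [
    15, 26, 32, 34, 34, 38, 38, 41, 42,        # Polish MT-Bench (9 metrics)
    129, 135, 135, 138, 139, 142, 154,         # PLCC (7 metrics)
]

# Average rank across all recorded results.
avg_rank = sum(ranks) / len(ranks)
print(avg_rank)  # 79.5
```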
§ 04 · Related models

Other PLLuM models scored on Codesota.

Llama-PLLuM-70B-chat · 0 results
Llama-PLLuM-70B-chat-250801 · 0 results
PLLuM-12B-chat · 0 results
PLLuM-12B-nc-chat · 0 results
PLLuM-12B-nc-chat-250715 · 0 results
PLLuM-8x7B-chat · 0 results
PLLuM-8x7B-nc-chat · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

SpeakLeash/MT-Bench-PL · 9 results
sdadas/PLCC · 7 results
16 of 16 rows marked verified.