Codesota · Models · WizardLM-2-8x22b — Microsoft · 7 results · 1 benchmark
Model card

WizardLM-2-8x22b.

Microsoft · open-source
§ 01 · Benchmarks

Every benchmark WizardLM-2-8x22b has a recorded score for.

| # | Benchmark | Area · Task | Metric | Value | Rank | Source |
|---|-----------|-------------|--------|-------|------|--------|
| 01 | PLCC | Natural Language Processing · Polish Cultural Competency | art-and-entertainment | 45.0% | #100/165 | source ↗ |
| 02 | PLCC | Natural Language Processing · Polish Cultural Competency | history | 67.0% | #103/165 | source ↗ |
| 03 | PLCC | Natural Language Processing · Polish Cultural Competency | culture-and-tradition | 50.0% | #110/165 | source ↗ |
| 04 | PLCC | Natural Language Processing · Polish Cultural Competency | average | 51.5% | #113/165 | source ↗ |
| 05 | PLCC | Natural Language Processing · Polish Cultural Competency | geography | 60.0% | #116/165 | source ↗ |
| 06 | PLCC | Natural Language Processing · Polish Cultural Competency | grammar | 49.0% | #117/165 | source ↗ |
| 07 | PLCC | Natural Language Processing · Polish Cultural Competency | vocabulary | 38.0% | #120/165 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric (total competitors after the slash); #1 indicates the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

How WizardLM-2-8x22b performs, grouped by area.

Natural Language Processing · 1 benchmark · avg rank #111.3
§ 04 · Related models

Other Microsoft models scored on Codesota.

RAD-DINO · 2 results · 1 SOTA
NaturalSpeech 3 · ~500M params · 1 result · 1 SOTA
Swin Transformer V2 Large · 197M params · 1 result · 1 SOTA
WavLM Large (SV) · 316M params · 1 result · 1 SOTA
ResNet-50 · 25M params · 3 results
Florence-2-Large · 2 results
KOSMOS-2.5 · 2 results
ResNet-152 · 60M params · 2 results
§ 05 · Sources & freshness

Where these numbers come from.

sdadas/PLCC · 7 results · 7 of 7 rows marked verified.