Codesota · Natural Language Processing · Language Modeling · Global-MMLU-Lite
Language Modeling · benchmark dataset · EN

Global-MMLU-Lite

Global-MMLU-Lite is a compact multilingual evaluation subset of the Global-MMLU benchmark. The Lite version covers 16 of the full benchmark's 42 languages, selecting those whose MMLU-style multiple-choice questions were fully human-translated or post-edited; this yields a smaller, reproducible evaluation set for multilingual model comparisons. Each language contributes 200 Culturally Sensitive (CS) and 200 Culturally Agnostic (CA) samples, i.e., 400 examples per language. License: Apache-2.0. (Source: the Hugging Face dataset card for CohereLabs/Global-MMLU-Lite and the Global-MMLU paper, arXiv:2412.03304 / ACL 2025.)
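Because of the fixed 200 CS + 200 CA split, results on this benchmark are typically reported per language and per cultural-sensitivity bucket. A minimal aggregation sketch, assuming hypothetical per-question record fields (`lang`, `label`, `correct`) rather than the dataset's official schema:

```python
from collections import defaultdict

def aggregate_scores(records):
    """Compute per-language accuracy, split by CS/CA label.

    `records` is a list of dicts with assumed (not official) fields:
      lang    - language code, e.g. "en"
      label   - "CS" (culturally sensitive) or "CA" (culturally agnostic)
      correct - bool, whether the model answered the question correctly
    """
    # For each language, track [hits, total] per label.
    totals = defaultdict(lambda: {"CS": [0, 0], "CA": [0, 0]})
    for r in records:
        bucket = totals[r["lang"]][r["label"]]
        bucket[0] += int(r["correct"])
        bucket[1] += 1
    # Convert counts to accuracies, skipping empty buckets.
    return {
        lang: {label: hits / n for label, (hits, n) in splits.items() if n}
        for lang, splits in totals.items()
    }

# Toy usage with two fake answers per bucket:
records = [
    {"lang": "en", "label": "CS", "correct": True},
    {"lang": "en", "label": "CS", "correct": False},
    {"lang": "en", "label": "CA", "correct": True},
    {"lang": "en", "label": "CA", "correct": True},
]
print(aggregate_scores(records))  # {'en': {'CS': 0.5, 'CA': 1.0}}
```

On the real dataset each bucket would hold 200 questions per language, so a CS/CA accuracy gap is directly comparable across the 16 languages.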

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
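Items 02 and 03 above amount to pinning everything that affects the run. One way to sketch that in Python, where the commit hash and seed are placeholder values rather than a required format:

```python
import hashlib
import json
import platform
import random

def run_config(seed: int, commit: str) -> dict:
    """Fix the random seed and record the evaluation environment.

    `commit` should be the frozen git commit hash of the reproduction
    script's repository ("abc1234" below is a placeholder).
    """
    random.seed(seed)  # make any sampling in the eval deterministic
    config = {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "seed": seed,
        "commit": commit,
    }
    # A content hash of the declared config doubles as a short
    # reproducibility fingerprint for the submission.
    config["fingerprint"] = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()[:12]
    return config

print(run_config(seed=42, commit="abc1234"))
```

Attaching such a record to a submission makes discrepancies (item 05) much easier to diagnose, since two runs with matching fingerprints were declared under the same environment.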