A multilingual version of the MMLU (Measuring Massive Multitask Language Understanding) benchmark, produced by translating the original English questions. MMLU is a 57-task multiple-choice benchmark covering subjects across the humanities, social sciences, and STEM, requiring broad world knowledge and problem-solving ability. The "okapi MMLU (translated)" assets on Hugging Face provide MMLU questions and answers translated into many languages (the HF pages list id, vi, ar, bn, de, es, fr, and others). The translated variants are commonly used for multilingual few-shot evaluation; the Okapi paper reports using translated MMLU in 5-shot evaluations. The license on the HF repos is listed as CC-BY-NC-4.0. Source references: the original MMLU paper (Hendrycks et al., arXiv:2009.03300), the Okapi project (instruction-tuned LLMs; arXiv:2307.16039), and the Hugging Face dataset pages (e.g., jon-tow/okapi_mmlu and SEACrowd/okapi_m_mmlu).
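For concreteness, here is a minimal sketch of loading one translated config and assembling a 5-shot prompt with the `datasets` library. The config name (`"de"`), the split names, and the column names (`question`, `choices`, `answer`) are assumptions based on typical MMLU-style layouts, not a confirmed schema; check the dataset card before relying on them.

```python
# Minimal sketch: load a translated MMLU config and build a 5-shot prompt.
# NOTE: the config name ("de"), split names, and column names ("question",
# "choices", "answer") are assumptions -- verify them on the dataset card.
from datasets import load_dataset

ds = load_dataset("jon-tow/okapi_mmlu", "de")  # one config per language (assumed)
print(ds)  # inspect the available splits and columns before relying on them

LETTERS = "ABCD"

def format_example(ex, with_answer=True):
    """Render one item as question, lettered options, and an 'Answer:' line."""
    lines = [ex["question"]]
    lines += [f"{letter}. {choice}" for letter, choice in zip(LETTERS, ex["choices"])]
    lines.append(f"Answer: {ex['answer']}" if with_answer else "Answer:")
    return "\n".join(lines)

def five_shot_prompt(dev_split, test_ex):
    """Prepend five answered dev examples to the unanswered test question."""
    shots = [format_example(dev_split[i]) for i in range(5)]
    return "\n\n".join(shots + [format_example(test_ex, with_answer=False)])
```

This mirrors the 5-shot protocol the Okapi paper reports, though the exact prompt template used there may differ.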
1 result indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy |
|---|---|---|---|---|---|
| 01 | Qwen2.5-72B-Instruct | — | Dec 2024 | Qwen2.5 Technical Report · code | 79.97 |
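The single indexed metric is plain accuracy over the multiple-choice items, reported as a percentage. A minimal sketch of how a score like 79.97 is computed (the function and its inputs are illustrative, not the exact scoring script used here):

```python
# Sketch: leaderboard accuracy as percent of items answered correctly.
def accuracy(predictions, references):
    """predictions/references are aligned lists of answer letters, e.g. 'A'."""
    assert len(predictions) == len(references), "length mismatch"
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

# accuracy(["A", "C", "B", "D"], ["A", "B", "B", "D"]) -> 75.0
```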
Each row below marks a model that broke the previous record on Accuracy. Intermediate submissions remain in the full leaderboard above; only record-setting entries are re-listed here. Higher scores win, and each subsequent entry improved on the previous best.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
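As a starting point for a reproduction script, the sketch below scores each question by comparing the model's next-token logits for the four answer letters, reusing the prompt helpers sketched earlier. The checkpoint name matches the listed entry; everything else (dtype handling, letter tokenization, split names) is an assumption to adapt to your model.

```python
#!/usr/bin/env python
# Sketch of a reproduction script: greedy letter choice via next-token logits.
# Assumes the format_example / five_shot_prompt helpers sketched earlier and
# a dataset with dev/test splits -- adjust to the real schema before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-72B-Instruct"  # the listed checkpoint; swap in your own
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto", device_map="auto")
model.eval()

# First token id of " A", " B", ... (leading space matches "Answer:" prompts);
# tokenizers differ here, so verify this mapping for your model.
letter_ids = [tok.encode(" " + L, add_special_tokens=False)[0] for L in "ABCD"]

def predict_letter(prompt: str) -> str:
    """Return the answer letter whose first token gets the highest logit."""
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token distribution
    return "ABCD"[int(torch.stack([logits[i] for i in letter_ids]).argmax())]
```

Pair `predict_letter` with the 5-shot prompt builder and the accuracy function above to produce a full per-language score.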