MMLU-Redux is a carefully re-annotated subset of the MMLU benchmark covering 30 subjects, with 100 randomly sampled questions per subject (3,000 questions total). It addresses numerous ground-truth errors found in the original dataset: the accompanying analysis found that approximately 6.49% of the sampled MMLU questions contain errors, with some subjects, such as Virology, containing errors in 57% of questions. The result is a more accurate and reliable evaluation of language model capabilities than the original 57-subject MMLU.
1 result indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy |
|---|---|---|---|---|---|
| 01 | Qwen2.5-72B-Instruct | — | Dec 2024 | Qwen2.5 Technical Report · code | 86.80 |
Each row below marks a model that broke the previous record on Accuracy (higher is better). Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
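As a minimal sketch of the Accuracy metric the leaderboard reports: it is the percentage of questions where the model's predicted answer choice exactly matches the re-annotated gold label. The function below is illustrative, not the official scoring script; dataset loading and model inference are assumed to happen upstream, and the prediction/gold lists here are hypothetical stand-ins.

```python
def accuracy(predictions, golds):
    """Percentage of exact matches between predicted and gold answer indices.

    Both arguments are equal-length sequences of answer-choice indices
    (e.g. 0-3 for MMLU's four-option multiple choice).
    """
    if len(predictions) != len(golds):
        raise ValueError("prediction/gold length mismatch")
    correct = sum(p == g for p, g in zip(predictions, golds))
    return 100.0 * correct / len(golds)

# Hypothetical example: 3 of 4 answers correct -> 75.0
print(accuracy([0, 1, 2, 3], [0, 1, 2, 0]))
```

A reproduction script for a submission would wrap this: load MMLU-Redux, run the checkpoint over all 3,000 questions, and report the resulting percentage to two decimal places, matching the table above.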