MMLU-Redux is a carefully re-annotated subset of the MMLU benchmark, covering 30 subjects with 100 randomly sampled questions each (3,000 questions total). It addresses the numerous ground-truth errors found in the original MMLU dataset, which spans 57 subjects: the accompanying analysis found that roughly 6.49% of MMLU questions contain errors, with some subjects, such as Virology, having errors in 57% of their questions. By correcting these annotations, MMLU-Redux provides a more accurate and reliable evaluation of language model capabilities on the subjects it covers.
Accuracy is the reported evaluation metric for MMLU-Redux. Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.
Higher is better
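As a minimal sketch of how the reported metric is computed: accuracy is simply the fraction of questions where the model's chosen option matches the ground-truth answer. The predictions and labels below are hypothetical, not drawn from any published evaluation.

```python
def accuracy(predictions, labels):
    """Fraction of questions answered correctly (higher is better)."""
    if not labels:
        raise ValueError("empty label list")
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical model outputs vs. gold answers (options A-D, as in MMLU).
preds = ["A", "C", "B", "D", "C"]
golds = ["A", "B", "B", "D", "A"]
print(f"{accuracy(preds, golds):.1%}")  # → 60.0%
```

Published scores such as the one in the table below are this fraction expressed as a percentage over all 3,000 questions.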
| Rank | Model | Trust | Score | Year | Source |
|---|---|---|---|---|---|
| 01 | Qwen2.5-72B-Instruct | paper | 86.8 | N/A | Source ↗ |