MMLU (Measuring Massive Multitask Language Understanding) is a popular benchmark for evaluating the capabilities of large language models. It is a multiple-choice collection spanning 57 subjects, from STEM to the humanities and social sciences, and it has inspired later variants and spin-offs such as MMLU-Pro.
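As a concrete illustration, here is a minimal sketch of what a single MMLU item looks like. It loads the community mirror hosted as cais/mmlu on the Hugging Face Hub; that dataset ID and the field names are assumptions about the mirror, not something this index prescribes.

```python
from datasets import load_dataset

# One test item from the assumed cais/mmlu mirror (config "all" pools all subjects).
ds = load_dataset("cais/mmlu", "all", split="test")
ex = ds[0]

print(ex["subject"])   # e.g. "abstract_algebra"
print(ex["question"])  # the question stem
print(ex["choices"])   # list of four answer options
print(ex["answer"])    # integer index (0-3) of the correct option
```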
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
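To make the expectation concrete, below is a hedged sketch of what such a reproduction script might look like: a simple zero-shot scorer that ranks the answer letters by next-token logit. The checkpoint path is a placeholder, and the prompt format, the cais/mmlu mirror, and the single-token-letter assumption are choices of this sketch rather than requirements of the index.

```python
# repro_mmlu.py -- zero-shot MMLU scorer sketch. The checkpoint path is a
# placeholder; the prompt format and dataset mirror are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "path/to/your-checkpoint"  # placeholder: the submitted checkpoint
LETTERS = ["A", "B", "C", "D"]

tok = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(
    CHECKPOINT, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Token ids for " A" .. " D". Assumes each encodes to a single trailing
# token, which holds for common BPE tokenizers but is worth verifying.
letter_ids = [tok.encode(f" {l}", add_special_tokens=False)[-1] for l in LETTERS]

ds = load_dataset("cais/mmlu", "all", split="test")

correct = 0
with torch.no_grad():
    for ex in ds:
        options = "\n".join(f"{l}. {c}" for l, c in zip(LETTERS, ex["choices"]))
        prompt = f"{ex['question']}\n{options}\nAnswer:"
        inputs = tok(prompt, return_tensors="pt").to(model.device)
        next_logits = model(**inputs).logits[0, -1]   # next-token distribution
        pred = int(next_logits[letter_ids].argmax())  # best-scoring letter
        correct += int(pred == ex["answer"])

print(f"accuracy: {correct / len(ds):.4f}")
```

A real submission would likely also pin the transformers and datasets versions and fix a random seed, so that our run of the script matches yours exactly.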