Codesota · Benchmark · MMLU-Redux

MMLU-Redux

A carefully re-annotated version of the MMLU benchmark covering 30 of the original 57 subjects, with 100 randomly sampled questions per subject (3,000 questions total). MMLU-Redux addresses numerous ground-truth errors found in the original MMLU dataset: the accompanying analysis found that approximately 6.49% of the sampled MMLU questions contain errors, with some subjects, such as Virology, containing errors in 57% of their questions. The re-annotated dataset provides a more accurate and reliable evaluation of language model capabilities on the covered subjects.
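The sampling scheme described above (100 questions drawn at random from each of 30 subjects, for 3,000 total) can be sketched as follows. This is an illustrative sketch only; the subject names and question pools are placeholders, not the actual MMLU data.

```python
# Illustrative sketch of the MMLU-Redux sampling scheme: 100 questions
# drawn uniformly at random from each of 30 subjects, 3,000 in total.
# Subject names and question pools below are placeholders.
import random

def sample_benchmark(questions_by_subject, per_subject=100, seed=0):
    """Draw `per_subject` questions at random from each subject's pool."""
    rng = random.Random(seed)
    return {
        subject: rng.sample(pool, per_subject)
        for subject, pool in questions_by_subject.items()
    }

# Placeholder pools: 30 subjects, each with 200 dummy question ids.
pools = {f"subject_{i}": list(range(200)) for i in range(30)}
redux = sample_benchmark(pools, per_subject=100)

total = sum(len(qs) for qs in redux.values())
print(total)  # 30 subjects x 100 questions = 3000
```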

§ 01 · SOTA history

Year over year.

Not enough data to show trend.
§ 02 · Leaderboard

Results by metric.

Only 1 model on this benchmark
Help build the community leaderboard — submit your model results.

Accuracy

Accuracy is the reported evaluation metric for MMLU-Redux. Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.

Higher is better
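Accuracy on a multiple-choice benchmark like MMLU-Redux is simply the fraction of questions where the model's chosen option matches the gold answer. A minimal sketch, with hypothetical predictions and answers for illustration:

```python
# Minimal sketch of the Accuracy metric for a multiple-choice benchmark:
# the fraction of questions where the predicted option matches the gold
# answer. The predictions and answers below are hypothetical examples.

def accuracy(predictions, gold):
    """Return the fraction of predictions that match the gold labels."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must have the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Hypothetical model outputs for five questions (options A-D).
preds = ["A", "C", "B", "D", "C"]
answers = ["A", "C", "D", "D", "C"]

print(f"Accuracy: {accuracy(preds, answers):.1%}")  # 4 of 5 correct
```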

Trust tiers for Accuracy: verified · paper · vendor · community · unverified
Rank | Model | Trust | Score | Year | Source
01 | Qwen2.5-72B-Instruct (dataset: MMLU-Redux; task: 5) | paper | 86.8 | N/A | Source ↗
§ 04 · Submit a result

Add to the leaderboard.

← Back to Language Modeling