
MMMU

MMMU is a large-scale benchmark for evaluating multimodal models on college-level, multi-discipline understanding and reasoning. It contains ~11.5K carefully collected multimodal questions drawn from college exams, quizzes, and textbooks, spanning 30 subjects and 183 subfields, with 30 heterogeneous image types (e.g., charts, diagrams, maps, tables, music sheets, chemical structures) that test expert-level reasoning across disciplines.
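Most MMMU items are multiple-choice with a single gold option letter. A minimal sketch of option-letter exact-match scoring — not the official evaluation code, and the example predictions are hypothetical:

```python
def exact_match_accuracy(predictions, answers):
    """Fraction of items whose predicted option letter matches the gold answer.

    Comparison is case-insensitive and ignores surrounding whitespace,
    so "c" and " C " both count as option C.
    """
    if len(predictions) != len(answers):
        raise ValueError("predictions and answers must have the same length")
    correct = sum(
        p.strip().upper() == a.strip().upper()
        for p, a in zip(predictions, answers)
    )
    return correct / len(answers)

# Hypothetical model outputs vs. gold labels:
preds = ["A", "c", "B", "D"]
golds = ["A", "C", "D", "D"]
print(exact_match_accuracy(preds, golds))  # → 0.75
```

Real harnesses also need an answer-extraction step (pulling the option letter out of free-form model text), which is where most scoring discrepancies come from.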

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top spot — annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
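The "frozen commit + seed" requirement means a rerun must be deterministic. The commit side is handled in git (e.g. `git checkout <sha>`); for the seed side, a minimal sketch of pinning Python's RNG state at the top of a reproduction script (seed value is an arbitrary example):

```python
import os
import random

def freeze_run(seed: int = 1234) -> None:
    """Pin the RNGs so a rerun of the evaluation draws identical values."""
    os.environ["PYTHONHASHSEED"] = str(seed)  # fixes str/bytes hash ordering
    random.seed(seed)

# Two runs with the same frozen seed produce identical draws:
freeze_run()
first = [random.randint(0, 99) for _ in range(5)]
freeze_run()
second = [random.randint(0, 99) for _ in range(5)]
print(first == second)  # → True
```

A real script would extend `freeze_run` to also seed NumPy, the deep-learning framework in use, and any sampling temperature/seed parameters of the model API, since each keeps its own RNG state.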