MMMU is a large-scale benchmark for evaluating multimodal models on college-level, multi-discipline understanding and reasoning. It contains ~11.5K carefully collected multimodal questions drawn from college exams, quizzes, and textbooks, spanning 30 subjects and 183 subfields, with 30 heterogeneous image types (e.g., charts, diagrams, maps, tables, music sheets, chemical structures) that test expert-level reasoning across disciplines.
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
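A reproduction script typically loads the checkpoint, runs inference on the benchmark split, and reports accuracy against the gold answers. A minimal sketch of the scoring step, assuming predictions and answers are kept as dicts mapping question IDs to option letters (the names `mmmu_accuracy`, `preds`, and `golds` are illustrative, not part of any official harness):

```python
def mmmu_accuracy(predictions, answers):
    """Fraction of questions whose predicted option letter matches the
    gold answer; both dicts map question id -> letter (e.g. 'A'-'E')."""
    if not answers:
        return 0.0
    correct = sum(
        predictions.get(qid, "").strip().upper() == gold.upper()
        for qid, gold in answers.items()
    )
    return correct / len(answers)

# Hypothetical example: two of three answers correct.
preds = {"val_q1": "A", "val_q2": "c", "val_q3": "B"}
golds = {"val_q1": "A", "val_q2": "C", "val_q3": "D"}
print(f"accuracy = {mmmu_accuracy(preds, golds):.3f}")  # accuracy = 0.667
```

Comparison is case-insensitive and missing predictions count as wrong, so the score is computed over every gold question rather than only the ones answered.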