Video-MMLU is a benchmark designed to rigorously evaluate how well Large Multimodal Models perform on Massive Multi-discipline Lecture Understanding. It tests whether models can genuinely understand and reason about knowledge-intensive lecture videos, such as theorem demonstrations and problem-solving walkthroughs in math, physics, and chemistry, complete with their dynamic formulas and animations. Doing so requires integrating visual and temporal information and grasping the reasoning behind them, much as a human student would.
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
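
To make the expected shape of a submission concrete, below is a minimal sketch of what a reproduction script could look like. It is only an illustration under assumed conventions: the `SubmittedModel` wrapper, the JSON field names (`id`, `video`, `question`), and the file names are all hypothetical and are not part of the official Video-MMLU harness.

```python
# Hypothetical reproduction-script skeleton. The class, field, and file names
# below are placeholders, not the official Video-MMLU harness; replace them
# with your model's actual loading and inference code.
import json
from pathlib import Path


class SubmittedModel:
    """Placeholder wrapper a submitter would replace with their checkpoint."""

    def __init__(self, checkpoint: Path) -> None:
        self.checkpoint = checkpoint  # load model weights here in a real submission

    def generate(self, video: str, question: str) -> str:
        # A real submission runs multimodal inference here; this stub just echoes.
        return f"[answer to: {question}]"


def run_eval(checkpoint: Path, questions_file: Path, out_file: Path) -> None:
    """Run the checkpoint over every question and write predictions to JSON."""
    model = SubmittedModel(checkpoint)
    predictions = []
    for item in json.loads(questions_file.read_text()):
        # Assumed fields per item: a unique id, a video path, and a question string.
        answer = model.generate(item["video"], item["question"])
        predictions.append({"id": item["id"], "answer": answer})
    out_file.write_text(json.dumps(predictions, indent=2))


if __name__ == "__main__":
    run_eval(Path("checkpoint.pt"), Path("questions.json"), Path("predictions.json"))
```

A script along these lines, paired with the checkpoint it loads, is enough for us to rerun the evaluation end to end and verify the reported score.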