Video-Language Models · benchmark dataset · 2025 · English

Video-MMLU: A Massive Multi-Discipline Lecture Understanding Benchmark.

Video-MMLU is a benchmark for massive multi-discipline lecture understanding. It tests whether large multimodal models can genuinely understand and reason about knowledge-intensive lecture videos, such as those that demonstrate theorems or work through problems in math, physics, and chemistry with dynamic formulas and animations. Scoring well requires a model to integrate visual and temporal information and to follow the reasoning on screen, much as a human student would.

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 02 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs (a minimal script sketch follows this list)
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
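To make items 01 through 03 concrete, here is a minimal sketch of what a reproduction script might look like. It is illustrative only: CHECKPOINT, EVAL_COMMIT, and the run_evaluation stub are hypothetical placeholders, not part of any published evaluation kit, and should be replaced with your submission's own details.

    """Reproduction-script sketch for a Video-MMLU submission (illustrative only)."""
    import json
    import platform
    import random
    import subprocess
    import sys

    # Hypothetical values: fill in your own submission's details.
    CHECKPOINT = "https://huggingface.co/your-org/your-model"  # item 01: public checkpoint
    EVAL_COMMIT = "<frozen-commit-sha>"                        # item 02: frozen commit
    SEED = 1234                                                # item 02: fixed seed

    def assert_frozen_commit(expected: str) -> None:
        """Abort unless the checked-out evaluation code matches the declared commit."""
        head = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
        if head != expected:
            sys.exit(f"evaluation code is at {head}, expected {expected}")

    def declare_environment() -> dict:
        """Record the Python version and pinned dependencies (item 03)."""
        deps = subprocess.check_output([sys.executable, "-m", "pip", "freeze"], text=True)
        return {"python": platform.python_version(), "deps": deps.splitlines()}

    def run_evaluation(checkpoint: str, seed: int) -> dict:
        """Placeholder: run your model on the benchmark here and return
        one entry per metric declared by the dataset (item 04)."""
        random.seed(seed)  # plus torch/numpy seeds if your model uses them
        raise NotImplementedError("wire in your model's inference loop")

    def main() -> None:
        assert_frozen_commit(EVAL_COMMIT)
        report = {
            "checkpoint": CHECKPOINT,
            "commit": EVAL_COMMIT,
            "seed": SEED,
            "environment": declare_environment(),
            "scores": run_evaluation(CHECKPOINT, SEED),
        }
        print(json.dumps(report, indent=2))

    if __name__ == "__main__":
        main()

Pinning the commit and seed up front means a rerun starts from exactly the state you measured, and the printed JSON report doubles as the per-metric rows item 04 asks for.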