MMWorld is a comprehensive benchmark for multi-discipline, multi-faceted world-model evaluation in videos. It provides a curated collection of videos across multiple disciplines, paired with questions that test various aspects of video understanding, including visual perception, domain knowledge, and reasoning. The dataset spans several domains and includes structured question-and-answer pairs for evaluating Large Multimodal Models on their ability to understand and reason about video content.
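To make the structure of such question-and-answer pairs concrete, here is a minimal Python sketch of one multiple-choice item and an accuracy metric over predictions. All field names (`video_id`, `discipline`, `options`, `answer_index`) are illustrative assumptions, not the benchmark's actual schema; consult the released dataset for the real field layout.

```python
from dataclasses import dataclass

@dataclass
class MMWorldQuestion:
    """One multiple-choice question attached to a video clip.

    Field names are assumptions for illustration only.
    """
    video_id: str       # identifier of the source video
    discipline: str     # e.g. "Science" or "Sports" (hypothetical labels)
    question: str       # the question text
    options: list[str]  # candidate answers
    answer_index: int   # index of the correct option

def accuracy(questions: list[MMWorldQuestion], predictions: list[int]) -> float:
    """Fraction of questions whose predicted option index is correct."""
    if not questions:
        return 0.0
    correct = sum(p == q.answer_index for q, p in zip(questions, predictions))
    return correct / len(questions)

# Two toy items: the model gets the first right and the second wrong.
qs = [
    MMWorldQuestion("vid_001", "Science", "What happens next?",
                    ["It melts", "It freezes", "It boils"], 2),
    MMWorldQuestion("vid_002", "Sports", "Which rule was broken?",
                    ["Offside", "Handball"], 0),
]
print(accuracy(qs, [2, 1]))  # one of two correct -> 0.5
```

A per-discipline breakdown (grouping questions by the `discipline` field before averaging) follows the same pattern.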
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.