Video-MME is a comprehensive evaluation benchmark for multi-modal large language models (MLLMs) in video analysis. It evaluates MLLMs on video understanding tasks using 900 newly collected, human-annotated videos. The dataset covers the full spectrum of video lengths and spans 6 key domains with 30 sub-class video types, and it integrates multi-modal inputs such as subtitles and audio to assess all-round MLLM capabilities.
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top spot — annotate the step on the progress chart with your name.
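The scoring step of a reproduction script could look like the minimal sketch below: it compares predicted multiple-choice option letters against ground-truth answers and reports overall accuracy. The question IDs and dict layout here are illustrative assumptions, not the official Video-MME submission format.

```python
def score_predictions(predictions, answers):
    """Compute accuracy of predicted option letters against ground truth.

    predictions, answers: dicts mapping question_id -> option letter.
    Matching is case-insensitive; missing predictions count as wrong.
    (Layout is a hypothetical sketch, not the official format.)
    """
    if not answers:
        return 0.0
    correct = sum(
        1
        for qid, gold in answers.items()
        if predictions.get(qid, "").strip().upper() == gold.strip().upper()
    )
    return correct / len(answers)


if __name__ == "__main__":
    # Toy example with made-up question IDs.
    answers = {"q1": "A", "q2": "C", "q3": "B"}
    predictions = {"q1": "A", "q2": "B", "q3": "b"}
    print(f"accuracy: {score_predictions(predictions, answers):.3f}")  # → accuracy: 0.667
```

A real script would additionally load the model checkpoint, run inference over the videos (with or without subtitles and audio), and write per-question predictions before this scoring step.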