Video-Language Models · benchmark dataset · EN

Minimal Video Pairs (MVP).

Minimal Video Pairs (MVP) is a shortcut-aware Video Question Answering (Video-QA) benchmark designed to evaluate the spatio-temporal and intuitive-physics understanding of video-language models. The benchmark is built from minimally different video pairs: the two videos in each pair differ only in small ways yet yield opposite correct answers to the same question, a design that reduces reliance on superficial visual or textual shortcuts. The dataset contains multiple-choice QA examples (reported as ~55K in the paper) curated from nine video sources spanning egocentric/first-person and third-person domains. It is organized into thematic subsets (e.g., human_object_interactions, intuitive_physics, robot_object_interactions, temporal_reasoning) and provides scripts to download the underlying videos, which are not hosted directly on Hugging Face for legal reasons. The primary evaluation metric is paired accuracy over minimal video pairs: a pair counts as correct only if both of its minimally different videos are answered correctly.
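The paired-accuracy metric can be sketched in a few lines. This is a minimal illustration of the scoring idea (a pair scores only when both of its videos are answered correctly), not the benchmark's official evaluation code; the function name and argument layout are our own.

```python
from collections import defaultdict

def paired_accuracy(predictions, answers, pair_ids):
    """Fraction of minimal video pairs in which BOTH videos
    receive the correct answer (illustrative sketch, not the
    official MVP evaluation script)."""
    pairs = defaultdict(list)
    for pred, gold, pid in zip(predictions, answers, pair_ids):
        pairs[pid].append(pred == gold)
    # A pair is credited only if every video in it is correct.
    return sum(all(correct) for correct in pairs.values()) / len(pairs)

# One correct pair and one half-correct pair -> 0.5
print(paired_accuracy(["A", "B", "A", "A"],
                      ["A", "B", "A", "B"],
                      [0, 0, 1, 1]))  # 0.5
```

Because credit requires both videos of a pair to be answered correctly, a model that ignores the video and picks the same answer for both members of a pair scores zero on that pair, which is what makes the metric shortcut-aware.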

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
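A reproduction script covering the requirements above might look like the following skeleton. Everything here is a hedged sketch: the actual model loading and evaluation calls depend on your checkpoint and are left as placeholders; only the seed freezing and environment declaration are generic Python.

```python
import json
import platform
import random

SEED = 0  # Requirement 02: frozen seed (pair it with a pinned git commit)
random.seed(SEED)

def declared_environment():
    """Requirement 03: record the evaluation environment so a run
    can be checked against the declared Python version and platform."""
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "seed": SEED,
    }

if __name__ == "__main__":
    # In a real submission, model loading and the MVP evaluation loop
    # would go here; this skeleton only emits the declared environment.
    print(json.dumps(declared_environment(), indent=2))
```

Printing the declared environment alongside the metric rows makes it easy to follow up on discrepancies (requirement 05) without guessing at the setup that produced a score.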