TempCompass is a video QA benchmark designed to evaluate the temporal perception abilities of Video LLMs. It covers fine-grained temporal aspects (e.g., speed and direction) and uses multiple task formats so that models cannot succeed by relying on single-frame content or language priors alone. The authors collect "conflicting" video pairs that share the same static content but differ in one specific temporal aspect, and they combine human annotation with LLM-based instruction generation to produce diverse task instructions. The public Hugging Face release provides four subsets, one per task format, each exposed as a single "test" split: multi-choice (up to 1.58k examples), yes/no (up to 2.45k examples), captioning (up to 2.0k examples), and caption_matching (up to 1.5k examples). TempCompass was introduced in the paper "TempCompass: Do Video LLMs Really Understand Videos?".
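A minimal loading sketch is below. The repository id `lmms-lab/TempCompass` and the config names are assumptions inferred from the task formats listed above; verify both against the dataset card before use.

```python
# Minimal sketch: load the four TempCompass subsets from Hugging Face.
# Assumptions (check the dataset card): the repo id is "lmms-lab/TempCompass"
# and the config names mirror the four task formats.
from datasets import load_dataset

for config in ["multi_choice", "yes_no", "captioning", "caption_matching"]:
    ds = load_dataset("lmms-lab/TempCompass", config, split="test")
    print(f"{config}: {len(ds)} examples")
    print(ds[0])  # inspect one record to see the actual field names
```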
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.
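As an illustration of the expected shape of a reproduction script, here is a sketch that scores the multi-choice subset by exact match on the predicted option letter. The repo id, config name, field names (`video`, `question`, `answer`), and the `answer_multi_choice` call are all hypothetical placeholders, and exact match is a simplification rather than the official TempCompass evaluation protocol; adapt everything to the actual dataset schema and your checkpoint's inference API.

```python
# Hypothetical reproduction-script skeleton for the multi-choice subset.
# Field names, the repo id, and the model interface are placeholders,
# not a confirmed API; exact match is a stand-in for the official metric.
import json
from datasets import load_dataset

def answer_multi_choice(video_path: str, question: str) -> str:
    """Placeholder: run your checkpoint and return an option letter like 'A'."""
    raise NotImplementedError("Wire up your Video LLM inference here.")

ds = load_dataset("lmms-lab/TempCompass", "multi_choice", split="test")  # assumption

records, correct = [], 0
for ex in ds:
    pred = answer_multi_choice(ex["video"], ex["question"])  # field names are assumptions
    hit = pred.strip().upper() == str(ex["answer"]).strip().upper()
    correct += hit
    records.append({"prediction": pred, "correct": hit})

print(f"accuracy: {correct / len(ds):.4f}")
with open("predictions.json", "w") as f:
    json.dump(records, f, indent=2)
```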