OVOBench is a benchmark for evaluating Video Language Models (Video-LLMs) on real-world online video understanding: a model must pick up temporal visual clues from the ongoing stream and withhold its response until sufficient evidence has accumulated. It covers three capabilities: backward tracing, real-time visual perception, and forward active responding.
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
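For orientation, the scoring step of a reproduction script might look like the minimal sketch below. The three task names come from the benchmark description above, but the data format and the `score` function are illustrative assumptions, not OVOBench's actual evaluation API.

```python
# Hypothetical scoring sketch: per-task accuracy plus an unweighted mean.
# Task names follow the benchmark description; everything else is assumed.
from collections import defaultdict

def score(results):
    """results: list of (task, predicted, gold) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for task, pred, gold in results:
        total[task] += 1
        if pred == gold:
            correct[task] += 1
    per_task = {t: correct[t] / total[t] for t in total}
    # Overall score: unweighted mean of the per-task accuracies.
    overall = sum(per_task.values()) / len(per_task)
    return per_task, overall

# Toy example with made-up predictions.
results = [
    ("backward_tracing", "A", "A"),
    ("backward_tracing", "B", "C"),
    ("real_time_perception", "yes", "yes"),
    ("forward_responding", "wait", "wait"),
]
per_task, overall = score(results)
print(per_task, overall)
```

Averaging per task rather than over all examples keeps a category with many items from dominating the overall number; a real submission would of course follow whatever weighting the benchmark specifies.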