TemporalBench is a multimodal video benchmark for fine-grained temporal understanding and reasoning. Introduced in the paper "TemporalBench: Benchmarking Fine-grained Temporal Understanding for Multimodal Video Models" (arXiv:2410.10818), it evaluates video–language models on a set of temporally focused tasks (e.g., short question answering, multi-binary temporal checks, event ordering, frequency/amplitude reasoning). The dataset provides evaluation splits and task-specific subsets; the subset referenced here as "MBA-short QA" is a short question-answering subset on which performance is reported as multi-binary short-QA accuracy, i.e., multi-binary correctness over the short QA items. The project page, code, and dataset resources are available from the authors, along with a hosted dataset entry on Hugging Face.
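As a rough illustration of the multi-binary scoring style described above, one common reading is that each QA item comes with several binary checks, and the item counts as correct only when every check passes. The sketch below assumes that reading; the `multi_binary_accuracy` helper and the sample flags are hypothetical, not taken from the benchmark's evaluation code.

```python
def multi_binary_accuracy(items):
    """Fraction of items where every binary check was answered correctly.

    items: list of lists; each inner list holds True/False flags, one per
    binary check belonging to a single QA item (assumed scoring scheme).
    """
    if not items:
        return 0.0
    # An item scores only if all of its binary checks are correct.
    correct = sum(1 for flags in items if all(flags))
    return correct / len(items)

# Three hypothetical items: the second fails one binary check.
preds = [[True, True, True], [True, False, True], [True, True]]
print(round(multi_binary_accuracy(preds), 4))  # → 0.6667
```

Under this all-or-nothing scheme, a model that gets most checks right but slips on one per item can still score near zero, which is what makes the metric a stringent test of temporal understanding.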
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.