LongVideoBench is a benchmark for long-context interleaved video-language understanding, filling a gap left by existing long-video benchmarks. It introduces a referring reasoning task to evaluate large multimodal models (LMMs) and counters the single-frame bias present in common video-understanding metrics. The benchmark spans a wide range of video lengths (up to one hour) and themes, with diverse question types and high-quality, manually annotated data, and is used to comprehensively evaluate both proprietary and open-source models on their long-context multimodal modeling capabilities.
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it and publish the score; if it takes the top spot, we will annotate the step on the progress chart with your name.
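For reference, a reproduction script can be as simple as: load the benchmark's annotated QA items, run your checkpoint's inference on each, and report exact-match accuracy. The sketch below is a minimal, hypothetical skeleton; the sample fields (`video`, `question`, `answer`) and the `predict` interface are assumptions for illustration, not the benchmark's official loader or API.

```python
# Hypothetical reproduction-script skeleton. The data format and the
# predict() interface are illustrative assumptions, not an official API.

def evaluate(predict, samples):
    """Score multiple-choice questions by exact match against the gold answer."""
    correct = sum(predict(s) == s["answer"] for s in samples)
    return correct / len(samples)

if __name__ == "__main__":
    # Toy stand-in for the benchmark's manually annotated QA items.
    samples = [
        {"video": "clip_001.mp4", "question": "...", "answer": "B"},
        {"video": "clip_002.mp4", "question": "...", "answer": "D"},
    ]

    # Replace this stub with your checkpoint's actual inference call.
    def predict(sample):
        return "B"

    print(f"accuracy: {evaluate(predict, samples):.3f}")
```

Keeping the script self-contained (one entry point, deterministic scoring) makes it easy for the maintainers to rerun and verify a submitted result.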