LVBench is a benchmark dataset for evaluating video-language models on extreme long video understanding. It targets the regime opened up by recent long-context work: models such as LWM leverage ring attention to push context lengths to the million-token scale, while approaches such as PLLaVA use feature pooling to adapt image-language pre-trained models to dense video understanding.
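To make the pooling idea concrete, here is a minimal sketch of PLLaVA-style spatiotemporal pooling over per-frame patch features. This is illustrative only, not LVBench or PLLaVA code: the tensor shapes, the square patch grid, and the (16, 12, 12) pooling target are all assumptions.

```python
# Minimal sketch of PLLaVA-style pooling, assuming per-frame patch
# features from a frozen image encoder. Shapes and the pooling target
# are illustrative assumptions, not the official PLLaVA configuration.
import torch
import torch.nn.functional as F

def pool_video_features(frame_feats: torch.Tensor,
                        target: tuple = (16, 12, 12)) -> torch.Tensor:
    """Reduce (T, N, D) patch features to a fixed token budget.

    frame_feats: (T, N, D): T frames, N patches per frame (assumed to
    form a square grid), D feature dims.
    Returns: (t*h*w, D) pooled tokens to feed to the language model.
    """
    T, N, D = frame_feats.shape
    side = int(N ** 0.5)  # patch grid side length (assumes square grid)
    # Rearrange to (D, T, H, W) so we can pool jointly over time and space.
    x = frame_feats.view(T, side, side, D).permute(3, 0, 1, 2)
    # Adaptive average pooling compresses any input length to `target`.
    x = F.adaptive_avg_pool3d(x.unsqueeze(0), target).squeeze(0)  # (D, t, h, w)
    return x.flatten(1).transpose(0, 1)  # (t*h*w, D)

# Example: 256 frames of 24x24 = 576 patches with 1024-dim features.
feats = torch.randn(256, 576, 1024)
tokens = pool_video_features(feats)
print(tokens.shape)  # torch.Size([2304, 1024]) with the 16x12x12 target
```

The key property is that the token count after pooling is fixed by `target`, so the language model's input budget stays constant no matter how many frames the video contributes.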
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
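For reference, a reproduction script can be as simple as a single entry point that loads the checkpoint and dumps per-question predictions. The skeleton below is hypothetical: the file names, JSON fields, and loader functions are placeholders to be replaced with your own model code, not a prescribed interface.

```python
# Hypothetical reproduction-script skeleton. Every name here (checkpoint
# path, data file, model loader, JSON fields) is a placeholder; only the
# predict-then-dump structure is the point.
import argparse
import json

def load_model(ckpt_path: str):
    raise NotImplementedError("load your checkpoint here")

def predict(model, video_path: str, question: str) -> str:
    raise NotImplementedError("run your model's inference here")

def main() -> None:
    ap = argparse.ArgumentParser()
    ap.add_argument("--checkpoint", required=True)
    ap.add_argument("--questions", default="questions.jsonl")   # placeholder
    ap.add_argument("--out", default="predictions.jsonl")       # placeholder
    args = ap.parse_args()

    model = load_model(args.checkpoint)
    with open(args.questions) as fin, open(args.out, "w") as fout:
        for line in fin:
            item = json.loads(line)
            answer = predict(model, item["video"], item["question"])
            fout.write(json.dumps({"id": item["id"], "answer": answer}) + "\n")

if __name__ == "__main__":
    main()
```

Keeping the script to one file with explicit command-line arguments makes it easy for us to rerun it unmodified against the held-out evaluation data.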