PLM-VideoBench is a human-annotated video evaluation suite introduced in the PerceptionLM paper (arXiv:2504.13180). It is designed to test detailed video understanding and reasoning about the “what”, “where”, “when”, and “how” of video content, and contains five task-specific subsets:

- **FGQA**: fine-grained multiple-choice QA
- **SGQA**: smart-glasses open-ended QA
- **RCap**: video region captioning
- **RTLoc**: region temporal localization
- **RDCap**: region dense video captioning

The paper states that the full PLM release includes 2.8M human-labeled instances across video QA and spatio-temporal captioning, and reports test-set sizes of ~4.3K (FGQA), ~665 (SGQA), ~10.06K (RCap), ~7.91K (RTLoc), and ~2.62K (RDCap). The evaluation metrics are MBAcc for FGQA, LLM-judge accuracy for SGQA and RCap, SODA for RDCap, and mean Recall@1 (averaged over IoU thresholds) for RTLoc. The Hugging Face dataset page (facebook/PLM-VideoBench) provides downloadable parquet subsets and metadata; its subset row counts (for example: fgqa ~11k rows, rcap ~14.7k rows, rdcap ~5.17k rows, rtloc ~12.5k rows, sgqa 665 rows) reflect the dataset files distributed on the hub, which is why they differ from the paper's test-set sizes. License: CC BY 4.0. Modalities: video + text (QA, captions, temporal spans).
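For quick local inspection, the hub subsets can be pulled with the `datasets` library. A minimal loading sketch, assuming the config names mirror the subset names above and that each config ships a `test` split; check the dataset page for the exact configs and splits:

```python
# Minimal PLM-VideoBench loading sketch. The config names ("fgqa", ...) and
# the "test" split are assumptions based on the subset names listed above;
# verify them against the hub page for facebook/PLM-VideoBench.
from datasets import load_dataset

for subset in ["fgqa", "sgqa", "rcap", "rtloc", "rdcap"]:
    ds = load_dataset("facebook/PLM-VideoBench", subset, split="test")
    print(subset, len(ds), ds.column_names)
```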
4 results indexed across 4 metrics. The shaded row marks the current SOTA; ties are broken by submission date.
**Accuracy** (LLM-judge accuracy)

| # | Model | Org | Submitted | Paper / code | Accuracy |
|---|---|---|---|---|---|
| 01 | PLM (8B) | — | Apr 2025 | PerceptionLM: Open-Access Data and Models for Detailed V… · code | 46.60 |
**MBAcc** (FGQA)

| # | Model | Org | Submitted | Paper / code | MBAcc |
|---|---|---|---|---|---|
| 01 | PLM (8B) | — | Apr 2025 | PerceptionLM: Open-Access Data and Models for Detailed V… · code | 67.70 |
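MBAcc is the paper's multi-binary accuracy for FGQA. The sketch below assumes a common reading of that metric: each multiple-choice question is expanded into several binary trials (correct option vs. one distractor at a time) and counts as correct only if every trial is answered correctly; consult the paper for the exact protocol.

```python
# Hypothetical MBAcc sketch: each MCQ is assumed to expand into one binary
# trial per distractor, and the question scores 1 only if all of its trials
# are answered correctly. The grouping key "qid" and field names are
# illustrative assumptions, not the benchmark's actual schema.
from collections import defaultdict

def mbacc(trials):
    """trials: iterable of dicts with keys 'qid' and 'correct' (bool)."""
    per_question = defaultdict(list)
    for t in trials:
        per_question[t["qid"]].append(t["correct"])
    if not per_question:
        return 0.0
    return sum(all(v) for v in per_question.values()) / len(per_question)

print(mbacc([
    {"qid": "q1", "correct": True},  {"qid": "q1", "correct": True},
    {"qid": "q2", "correct": True},  {"qid": "q2", "correct": False},
]))  # 0.5: q1 passes all trials, q2 fails one
```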
**Mean Recall@1** (RTLoc; averaged over IoU thresholds)

| # | Model | Org | Submitted | Paper / code | Mean Recall@1 |
|---|---|---|---|---|---|
| 01 | PLM (8B) | — | Apr 2025 | PerceptionLM: Open-Access Data and Models for Detailed V… · code | 59.10 |
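Mean Recall@1 for RTLoc averages Recall@1 over a set of temporal-IoU thresholds. A sketch with one predicted span per query; the threshold grid below is an illustrative assumption, since the paper fixes the actual thresholds.

```python
# Mean Recall@1 sketch for temporal localization: a prediction is a hit at
# threshold t if its temporal IoU with the ground-truth span is >= t.
# The threshold grid below is an assumption; the paper defines the real one.

def tiou(a, b):
    """Temporal IoU of two (start, end) spans in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union if union > 0 else 0.0

def mean_recall_at_1(preds, gts, thresholds=(0.3, 0.5, 0.7, 0.9)):
    """preds, gts: parallel lists of (start, end) spans, one pred per query."""
    ious = [tiou(p, g) for p, g in zip(preds, gts)]
    recalls = [sum(i >= t for i in ious) / len(ious) for t in thresholds]
    return sum(recalls) / len(recalls)

# First pred overlaps its target (IoU 0.8), second misses entirely.
print(mean_recall_at_1([(0.0, 5.0), (10.0, 12.0)], [(1.0, 5.0), (30.0, 40.0)]))
```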
**SODA** (RDCap)

| # | Model | Org | Submitted | Paper / code | SODA |
|---|---|---|---|---|---|
| 01 | PLM (8B) | — | Apr 2025 | PerceptionLM: Open-Access Data and Models for Detailed V… · code | 52.80 |
Each row below marks a model that broke the previous record on Accuracy; higher scores win. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
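As a starting point, a reproduction script can be a short loop that runs the checkpoint over one subset and prints a score. A hypothetical skeleton: the `load_model` and `predict` placeholders and the `answer` field name are stand-ins for your own code and the actual schema, and the official FGQA number is MBAcc rather than the plain accuracy printed here.

```python
# Hypothetical reproduction-script skeleton. Everything model-side
# (load_model, predict) is a placeholder to replace with your checkpoint;
# the config name and the "answer" field are assumptions about the hub layout.
from datasets import load_dataset

def load_model(path):        # placeholder: load your checkpoint
    raise NotImplementedError

def predict(model, example): # placeholder: return the model's answer string
    raise NotImplementedError

def main():
    ds = load_dataset("facebook/PLM-VideoBench", "fgqa", split="test")
    model = load_model("my_checkpoint")
    correct = sum(predict(model, ex) == ex["answer"] for ex in ds)
    print(f"accuracy: {correct / len(ds):.4f}")

if __name__ == "__main__":
    main()
```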