Something-Something V2 is a large-scale, temporally sensitive action recognition / video classification dataset of short, trimmed videos of humans performing basic actions with everyday objects. Version 2 contains 220,847 labeled clips covering 174 fine-grained action classes, each defined by a caption template with object placeholders (e.g., "Putting [something] into [something]"). The data were crowd-acted rather than scraped: contributors recorded themselves performing the templated actions, a collection design intended to emphasize temporal reasoning, i.e., distinguishing actions that require motion context rather than single-frame cues (such as "Moving something up" vs. "Moving something down"). It is widely used to benchmark action recognition / video classification models, and leaderboards report top-1/top-5 accuracy and other standard metrics. (Sources: Goyal et al., ICCV 2017; TwentyBN release notes; Hugging Face dataset card.)
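For reference, top-1/top-5 accuracy is just the fraction of clips whose true class appears among the model's k highest-scoring predictions. A minimal sketch of the computation (the function name and the toy inputs are illustrative, not taken from any particular evaluation harness):

```python
import numpy as np

def topk_accuracy(logits: np.ndarray, labels: np.ndarray, ks=(1, 5)) -> dict:
    """Fraction of clips whose true class is among the k top-scoring classes.

    logits: (num_clips, num_classes) model scores, e.g. (N, 174) for SSv2.
    labels: (num_clips,) integer class ids.
    """
    # Class indices sorted by descending score, per clip.
    ranked = np.argsort(-logits, axis=1)
    return {
        f"top-{k}": float((ranked[:, :k] == labels[:, None]).any(axis=1).mean())
        for k in ks
    }

# Toy usage with random scores over the 174 SSv2 classes.
rng = np.random.default_rng(0)
logits = rng.standard_normal((8, 174))
labels = rng.integers(0, 174, size=8)
print(topk_accuracy(logits, labels))  # e.g. {'top-1': ..., 'top-5': ...}
```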
No results indexed yet. Be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.
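A reproduction script can be as small as the skeleton below. This is a sketch only: the checkpoint path, `build_model`, and `val_batches` are placeholders for whatever your submission actually provides (the toy shapes here are deliberately tiny), and only the evaluation loop reflects how we score runs.

```python
"""Sketch of a reproduction script; everything marked PLACEHOLDER stands in
for your actual submission."""
import torch
from torch import nn

CHECKPOINT = "checkpoints/model.pt"  # PLACEHOLDER: path to your checkpoint

def build_model() -> nn.Module:
    # PLACEHOLDER: swap in your real architecture before submitting.
    return nn.Sequential(nn.Flatten(), nn.LazyLinear(174))

def val_batches():
    # PLACEHOLDER: yield (clips, labels) from the real SSv2 validation split.
    for _ in range(4):
        yield torch.randn(2, 8, 3, 32, 32), torch.randint(0, 174, (2,))

def main() -> None:
    model = build_model()
    # Uncomment once CHECKPOINT points at a real file:
    # model.load_state_dict(torch.load(CHECKPOINT, map_location="cpu"))
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for clips, labels in val_batches():
            preds = model(clips).argmax(dim=1)  # top-1 prediction per clip
            correct += (preds == labels).sum().item()
            total += labels.numel()
    print(f"top-1 accuracy: {correct / total:.4f}")

if __name__ == "__main__":
    main()
```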