UCF101 is a widely used action recognition benchmark consisting of realistic, unconstrained video clips collected from YouTube. It contains 13,320 clips spanning 101 human action categories across five types: sports, body motion, human-object interaction, human-human interaction, and playing musical instruments. The clips for each action are divided into 25 groups of 4–7 videos to support cross-group evaluation, and the dataset exhibits large variation in camera motion, viewpoint, background clutter, illumination, object scale, and appearance. It was introduced as a benchmark for video action recognition, with baseline results reported in the original paper (Soomro et al., 2012). The dataset appears in the DINOv3 evaluations, where it is used for video classification and results are reported as top-1 accuracy.
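The reported metric is top-1 accuracy: the fraction of clips whose highest-scoring predicted class matches the ground-truth label. A minimal sketch of that computation, independent of any particular UCF101 loader or model (the array shapes and toy values below are illustrative assumptions):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of clips whose highest-scoring class matches the label.

    logits: (num_clips, num_classes) array of per-class scores
            (num_classes would be 101 for UCF101).
    labels: (num_clips,) array of ground-truth class indices.
    """
    preds = logits.argmax(axis=1)          # predicted class per clip
    return float((preds == labels).mean()) # share of correct predictions

# Toy example: 4 clips, 3 classes, 3 of 4 predictions correct.
logits = np.array([
    [2.0, 0.1, 0.3],
    [0.2, 1.5, 0.1],
    [0.9, 0.2, 0.1],   # argmax is class 0, but the label is 2
    [0.1, 0.1, 3.0],
])
labels = np.array([0, 1, 2, 2])
print(top1_accuracy(logits, labels))  # -> 0.75
```

In the standard UCF101 protocol, accuracy is averaged over the three official train/test splits, which are drawn so that clips from the same group never appear in both train and test.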
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.