Codesota · Computer Vision · Video classification · UCF101
Video classification · benchmark dataset · EN

UCF101: A Dataset of 101 Human Action Classes From Videos in the Wild.

UCF101 is a widely used action recognition benchmark consisting of realistic, unconstrained video clips collected from YouTube. It contains 13,320 clips spanning 101 human action categories (e.g., sports, body motion, human-object interaction, human-human interaction, and playing musical instruments). The clips for each action are divided into 25 groups of 4–7 videos each, where clips in a group share common features such as the same actor or background; this grouping supports cross-group evaluation. The dataset exhibits large variation in camera motion, viewpoint, background clutter, illumination, object scale, and appearance. UCF101 was introduced as a benchmark for video action recognition, with baseline results reported in the original paper (Soomro et al., 2012). It appears in DINOv3's evaluations and is typically used for video classification with top-1 accuracy as the reported metric.
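Top-1 accuracy on UCF101 is usually reported at the video level: per-clip class scores are averaged within each video before taking the argmax. Below is a minimal sketch of that aggregation; all function and variable names are illustrative, and the clip scores are assumed to already be softmax outputs from some classifier.

```python
import numpy as np

def video_top1_accuracy(clip_scores, clip_video_ids, video_labels):
    """Video-level top-1 accuracy from clip-level scores.

    clip_scores:    (num_clips, num_classes) per-clip class scores
    clip_video_ids: (num_clips,) index of the video each clip came from
    video_labels:   (num_videos,) ground-truth class per video
    """
    num_videos = len(video_labels)
    num_classes = clip_scores.shape[1]
    summed = np.zeros((num_videos, num_classes))
    counts = np.zeros(num_videos)
    # Accumulate scores per video, then average.
    for score, vid in zip(clip_scores, clip_video_ids):
        summed[vid] += score
        counts[vid] += 1
    preds = (summed / counts[:, None]).argmax(axis=1)
    return float((preds == np.asarray(video_labels)).mean())
```

For example, with two videos (three clips total) whose averaged scores both peak at the correct class, the function returns 1.0.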

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
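The "frozen commit + seed" and "declared environment" items above can be sketched as a minimal reproduction-script header. Everything here is illustrative, not part of any Codesota API: the commit hash is a placeholder you would pin yourself, and seeding would be extended to NumPy/PyTorch if the evaluation uses them.

```python
import json
import platform
import random
import sys

SEED = 0

def set_seed(seed: int) -> None:
    # Pin every RNG the evaluation touches; add numpy/torch seeding as needed.
    random.seed(seed)

def declare_environment() -> dict:
    # Record interpreter, platform, seed, and code version so reviewers
    # can trace any scoring discrepancy back to the exact setup.
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "seed": SEED,
        "commit": "<frozen git commit hash>",  # placeholder, pin the real hash
    }

if __name__ == "__main__":
    set_seed(SEED)
    print(json.dumps(declare_environment(), indent=2))
    # ... run the actual evaluation here and emit one row per metric ...
```

Printing the environment dictionary alongside the score makes the submission self-documenting even if the surrounding infrastructure changes.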