Codesota · Computer Vision · Video classification · Kinetics-400
Video classification · benchmark dataset · EN

Kinetics Human Action Video Dataset (Kinetics-400).

Kinetics-400 (The Kinetics Human Action Video Dataset) is a large-scale video classification benchmark introduced by Will Kay et al. at DeepMind. It covers 400 human action classes, with at least 400 video clips per class. Each clip lasts roughly 10 seconds, is taken from a distinct YouTube video, and carries a single human-annotated action label. The dataset serves as a standard benchmark for action recognition and has been widely used to train and evaluate video classification models. It was introduced in "The Kinetics Human Action Video Dataset" (Kay et al., arXiv:1705.06950); the dataset and common community redistributions are released under a Creative Commons Attribution (CC BY 4.0) license.
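Results on Kinetics-400 are most commonly reported as clip-level top-1 and top-5 accuracy. A minimal sketch of those metrics, using made-up scores (in practice each row would be a model's softmax output over the 400 classes for one clip):

```python
def topk_accuracy(scores, labels, k):
    """Fraction of clips whose true label is among the k highest-scoring classes."""
    hits = 0
    for clip_scores, label in zip(scores, labels):
        # indices of the k largest scores for this clip
        topk = sorted(range(len(clip_scores)),
                      key=lambda i: clip_scores[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Toy example with 4 classes instead of 400; scores are invented.
scores = [
    [0.1, 0.6, 0.2, 0.1],  # predicted class 1
    [0.5, 0.1, 0.3, 0.1],  # predicted class 0
    [0.2, 0.2, 0.5, 0.1],  # predicted class 2
]
labels = [1, 2, 2]

print(topk_accuracy(scores, labels, k=1))  # 2 of 3 clips correct at top-1
print(topk_accuracy(scores, labels, k=5))  # trivially 1.0 with only 4 classes
```

The same clip-level computation generalizes to the 400-class case; many papers additionally average scores over multiple clips per video before computing accuracy.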

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it tops the table, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
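A submission covering the points above might look like the following manifest. This is a hypothetical sketch: every field name, URL, and value here is illustrative, not a fixed schema.

```yaml
# Hypothetical submission manifest; all fields are illustrative.
checkpoint: https://example.com/ckpt/my-model-k400.pt   # 01: public checkpoint
repro:
  repo: https://github.com/example/my-model
  commit: 0a1b2c3d                                      # 02: frozen commit
  seed: 42
environment:                                            # 03: declared environment
  python: "3.11"
  dependencies: [torch==2.3.0, torchvision==0.18.0]
results:                                                # 04: one row per metric
  - metric: top-1 accuracy
    value: 0.00   # your measured score
  - metric: top-5 accuracy
    value: 0.00
contact: you@example.com                                # 05: follow-up contact
```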