Kinetics-400 (The Kinetics Human Action Video Dataset) is a large-scale video dataset for human action classification, introduced by Will Kay et al. at DeepMind. It covers 400 human action classes, with at least 400 video clips per class. Each clip lasts roughly 10 seconds, is taken from a distinct YouTube video, and is human-annotated with a single action label. The dataset serves as a standard benchmark for action recognition / video classification and is widely used to train and evaluate video classification models. It was introduced in the paper "The Kinetics Human Action Video Dataset" (Kay et al., arXiv:1705.06950). The annotations and metadata in available releases and community redistributions are commonly distributed under a Creative Commons Attribution (CC BY 4.0) license.
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top spot — annotate the step on the progress chart with your name.