EPIC-KITCHENS-100 (EK100) is a large-scale egocentric (first-person) video dataset of daily kitchen activities, released as an extension of the original EPIC-KITCHENS collection. It comprises ~100 hours of head-mounted camera footage recorded in 45 kitchens across multiple cities, with dense audio narrations and manual annotations collected via a “pause-and-talk” narration interface. Key statistics: ~100 hours of Full HD video (~20M frames), ~90K action segments, ~20K unique narrations, 97 verb classes, and ~300 noun classes. The dataset underpins several benchmark challenges: action recognition (with full or weak supervision), action detection, action anticipation (a widely used benchmark where mean-class recall@5 for verb, noun, and joint action is reported on the validation set), cross-modal retrieval, and unsupervised domain adaptation. Official resources include the dataset website, the annotations GitHub repo, and the dataset paper (arXiv:2006.13256).
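For concreteness, the anticipation metric mentioned above (mean-class recall@5) averages per-class top-5 recall over the classes present in the ground truth, so rare classes count as much as frequent ones. Below is a minimal sketch of that computation; the function name and array layout (one score row per instance, one integer label per instance) are our own conventions, not an official EK100 evaluation API.

```python
import numpy as np

def mean_class_recall_at_k(scores, labels, k=5):
    """Class-mean recall@k.

    scores: (N, C) array of per-class prediction scores.
    labels: (N,) array of integer ground-truth class indices.
    Returns the fraction of instances whose true label lands in the
    top-k predictions, averaged over classes present in `labels`.
    """
    topk = np.argsort(scores, axis=1)[:, -k:]        # indices of the k highest scores per row
    hits = (topk == labels[:, None]).any(axis=1)     # per-instance top-k hit
    classes = np.unique(labels)
    per_class = [hits[labels == c].mean() for c in classes]
    return float(np.mean(per_class))
```

For the joint-action variant, verb and noun predictions are typically combined into (verb, noun) pair scores and the same class-mean recall is computed over action classes.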
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top spot — annotate the step on the progress chart with your name.