Codesota · Computer Vision · Video classification · EPIC-KITCHENS-100 (EK100)
Video classification · benchmark dataset · EN

EPIC-KITCHENS-100 (EK100).

EPIC-KITCHENS-100 (EK100) is a large-scale egocentric (first-person) video dataset of daily kitchen activities, released as an extension of the original EPIC-KITCHENS collection. It comprises ~100 hours of head-mounted camera footage recorded in 45 kitchens across 4 cities, with dense audio-visual narrations and manual annotations collected via a “pause-and-talk” narration interface. Key statistics: ~100 hours of Full HD video (~20M frames), ~90K action segments, ~20K unique narrations, 97 verb classes and 300 noun classes. The dataset supports several challenges: action recognition (with full and weak supervision), action detection, action anticipation (a widely used anticipation benchmark, reporting class-mean top-5 recall for verb, noun and action on the validation set), cross-modal retrieval and unsupervised domain adaptation. Official resources include the dataset website, the annotations GitHub repository and the dataset paper (arXiv:2006.13256).
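The anticipation metric mentioned above can be sketched as class-mean top-5 recall: a prediction counts as a hit if the ground-truth class appears among the five highest-scoring classes, and the hit rate is averaged within each class before averaging across classes. A minimal, dependency-free sketch (function name and input shapes are illustrative, not the official evaluation code):

```python
def mean_class_top5_recall(scores, labels):
    """scores: one list of per-class scores per sample;
    labels: ground-truth class id per sample.
    Returns recall@5 averaged over the classes present in `labels`."""
    hits = {}
    for row, y in zip(scores, labels):
        # Indices of the 5 highest-scoring classes for this sample.
        top5 = sorted(range(len(row)), key=lambda c: -row[c])[:5]
        hits.setdefault(y, []).append(y in top5)
    # Per-class hit rate, then unweighted mean across classes.
    return sum(sum(h) / len(h) for h in hits.values()) / len(hits)
```

Because the average is taken per class first, rare classes weigh as much as frequent ones, which is why the metric is reported as "class-mean" rather than overall accuracy.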

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.
§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
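The checklist above (frozen commit + seed, declared environment) can be sketched as a minimal reproduction-script skeleton. This is an assumed layout, not a required format; the commit string and seed value are placeholders:

```python
# Sketch of a reproduction-script skeleton: pin the seed, record the commit
# and the evaluation environment so a run can be repeated exactly.
import json
import platform
import random

SEED = 0                     # frozen seed (placeholder value)
COMMIT = "<frozen-commit>"   # git commit the run is pinned to (placeholder)

def declared_environment():
    """Return the evaluation environment as a serializable record."""
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "commit": COMMIT,
        "seed": SEED,
    }

if __name__ == "__main__":
    random.seed(SEED)  # seed every RNG your pipeline uses before evaluation
    print(json.dumps(declared_environment(), indent=2))
```

Emitting the environment record alongside each metric row makes discrepancies between your run and ours much easier to trace.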