TAP-Vid is a benchmark for the Tracking Any Point (TAP) problem: given a video and a set of 2D query points, the task is to track the physical surface points they mark through time. Introduced by DeepMind researchers (Doersch et al., 2022), TAP-Vid combines real-world videos with accurate human-annotated 2D point tracks and synthetic videos with dense ground-truth trajectories, enabling evaluation of long-range motion, deformation, and occlusion. The benchmark provides standardized evaluation splits and metrics adopted by follow-up work: Average Jaccard (AJ), average position accuracy over visible points (delta_avg^vis), and Occlusion Accuracy (OA). It is widely used to evaluate point-level tracking methods such as TAP-Net, TAPTR, and CoTracker.
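To make the three metrics concrete, here is a minimal NumPy sketch of how they are typically computed, assuming per-query arrays of predicted and ground-truth (x, y) positions plus boolean visibility flags, with coordinates on frames resized to 256x256 (the resolution at which the pixel thresholds are defined). The array names and single-video scope are illustrative; the official evaluation code lives in DeepMind's tapnet repository.

```python
import numpy as np

THRESHOLDS = (1, 2, 4, 8, 16)  # pixel thresholds used by the benchmark

def tapvid_metrics(gt_xy, gt_visible, pred_xy, pred_visible):
    """gt_xy, pred_xy: float (num_points, num_frames, 2);
    gt_visible, pred_visible: bool (num_points, num_frames)."""
    dist = np.linalg.norm(pred_xy - gt_xy, axis=-1)  # (num_points, num_frames)

    # Occlusion Accuracy (OA): how often the predicted visibility flag
    # matches the ground-truth one.
    oa = np.mean(pred_visible == gt_visible)

    delta_per_thresh, jaccard_per_thresh = [], []
    for t in THRESHOLDS:
        within = dist < t

        # delta^x at threshold t: fraction of ground-truth-visible points
        # whose predicted position lands within t pixels of the truth.
        delta_per_thresh.append(np.mean(within[gt_visible]))

        # Jaccard at t: TP / (TP + FP + FN), where a true positive is
        # predicted visible, ground-truth visible, and within threshold.
        tp = np.sum(within & pred_visible & gt_visible)
        fp = np.sum(pred_visible & ~(gt_visible & within))
        fn = np.sum(gt_visible & ~(pred_visible & within))
        jaccard_per_thresh.append(tp / (tp + fp + fn))

    return {
        "OA": oa,
        "delta_avg_vis": np.mean(delta_per_thresh),  # delta_avg^vis
        "AJ": np.mean(jaccard_per_thresh),           # Average Jaccard
    }
```

Both delta_avg^vis and AJ average over the five thresholds, so a single number rewards methods that are accurate at fine and coarse scales alike.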
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script (a hypothetical skeleton follows below). We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
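As a rough sketch of what a reproduction script might look like, the skeleton below loads a checkpoint, runs the tracker over a dataset, and prints averaged metrics. Every name here is a placeholder, not a prescribed submission API: the checkpoint file, the dataset pickle and its field names, and the track() interface all stand in for whatever your method provides, and tapvid_metrics refers to the sketch above.

```python
import pickle

import numpy as np

def load_tracker(checkpoint_path):
    # Placeholder: swap in your model's own checkpoint-loading code.
    raise NotImplementedError(checkpoint_path)

def main():
    model = load_tracker("checkpoint.pt")       # hypothetical checkpoint file
    with open("tapvid_davis.pkl", "rb") as f:   # hypothetical dataset path
        dataset = pickle.load(f)

    per_video = []
    for example in dataset.values():
        # Hypothetical interface: the tracker returns a position and a
        # visibility flag for each query point in each frame.
        pred_xy, pred_visible = model.track(example["video"], example["points"])
        per_video.append(tapvid_metrics(
            example["points"], ~example["occluded"], pred_xy, pred_visible))

    # Average each metric over videos so the published score is one number.
    for key in ("AJ", "delta_avg_vis", "OA"):
        print(key, np.mean([v[key] for v in per_video]))

if __name__ == "__main__":
    main()
```

Any script with this overall shape (checkpoint in, metrics out, no manual steps) is straightforward for us to rerun and verify.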