Codesota · Computer Vision · Depth estimation · NYUv2 (metric)
Depth estimation · benchmark dataset · EN

NYU Depth Dataset V2 (Metric Depth).

The NYUv2 dataset is a standard benchmark for depth estimation. It comprises video sequences from a variety of indoor scenes, recorded with the RGB and depth cameras of the Microsoft Kinect. It features 1,449 densely labeled pairs of aligned RGB and depth images, 464 new scenes taken from 3 cities, and 407,024 new unlabeled frames. Each object is labeled with a class and an instance number. The dataset has three components: Labeled (a subset of the video data with dense multi-class labels, preprocessed to fill in missing depth values), Raw (the raw RGB, depth, and accelerometer streams from the Kinect), and Toolbox (functions for manipulating the data and labels).
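The Labeled component ships as a single MATLAB v7.3 file (`nyu_depth_v2_labeled.mat`), which is HDF5 under the hood and can be read with `h5py`. A minimal loading sketch follows; the key names (`images`, `depths`) and the (N, C, W, H) storage layout are assumptions based on the official toolbox description, so verify them against your copy of the file. The demo runs on a small synthetic file with the same layout, since the real file is several gigabytes.

```python
import os
import tempfile

import h5py
import numpy as np


def load_labeled_pairs(path):
    """Yield (rgb, depth) pairs as conventional (H, W, ...) arrays.

    Assumes the labeled .mat layout: 'images' stored as (N, 3, W, H) uint8
    and 'depths' as (N, W, H) float metres, per the toolbox description.
    """
    with h5py.File(path, "r") as f:
        images = f["images"]  # assumed key: RGB frames
        depths = f["depths"]  # assumed key: in-painted metric depth
        for i in range(images.shape[0]):
            rgb = np.transpose(images[i], (2, 1, 0))  # -> (H, W, 3)
            depth = np.transpose(depths[i], (1, 0))   # -> (H, W)
            yield rgb, depth


# Demo on a tiny synthetic file with the assumed layout.
tmp = os.path.join(tempfile.mkdtemp(), "toy_labeled.mat")
with h5py.File(tmp, "w") as f:
    f["images"] = np.zeros((2, 3, 640, 480), dtype=np.uint8)
    f["depths"] = np.ones((2, 640, 480), dtype=np.float32)

rgb, depth = next(load_labeled_pairs(tmp))
print(rgb.shape, depth.shape)  # (480, 640, 3) (480, 640)
```

The transposes convert MATLAB's column-major (W, H) storage back to the (H, W) orientation most Python pipelines expect.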

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
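For metric depth on NYUv2, the metrics usually reported are absolute relative error, RMSE, mean log10 error, and the threshold accuracies δ < 1.25^k. A minimal numpy sketch is below, assuming prediction and ground truth are aligned arrays in metres and that pixels without ground truth (depth 0) are masked out; exact evaluation crops and depth caps vary by paper, so check the submission guide for this benchmark's choices.

```python
import numpy as np


def depth_metrics(pred, gt):
    """Standard monocular-depth metrics over valid ground-truth pixels."""
    mask = gt > 0                      # ignore pixels with no ground truth
    pred, gt = pred[mask], gt[mask]
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "abs_rel": float(np.mean(np.abs(pred - gt) / gt)),
        "rmse": float(np.sqrt(np.mean((pred - gt) ** 2))),
        "log10": float(np.mean(np.abs(np.log10(pred) - np.log10(gt)))),
        "delta1": float(np.mean(ratio < 1.25)),
        "delta2": float(np.mean(ratio < 1.25 ** 2)),
        "delta3": float(np.mean(ratio < 1.25 ** 3)),
    }


# Sanity check: a perfect prediction has zero error and delta1 = 1.0.
gt = np.full((480, 640), 2.0)
scores = depth_metrics(gt.copy(), gt)
print(scores)
```

A submission would report one row per key of this dictionary.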