The NYUv2 dataset is used for depth estimation. It comprises video sequences from a variety of indoor scenes, recorded by the RGB and depth cameras of the Microsoft Kinect. It features 1,449 densely labeled pairs of aligned RGB and depth images, 464 new scenes taken from 3 cities, and 407,024 new unlabeled frames. Each object is labeled with a class and an instance number. The dataset has three components: Labeled (a subset of the video data with dense multi-class labels, preprocessed to fill in missing depth values), Raw (the raw RGB, depth, and accelerometer data from the Kinect), and Toolbox (functions for manipulating the data and labels).
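The Labeled component is distributed as a single MATLAB v7.3 file (HDF5 under the hood), so it can be read in Python with `h5py`. The sketch below assumes the key names (`images`, `depths`, `labels`) and axis order of the official `nyu_depth_v2_labeled.mat` release; verify them against your own download. For a self-contained demo it builds a tiny synthetic file with the same layout instead of the real 1,449-pair file.

```python
# Sketch: reading the NYUv2 labeled subset (MATLAB v7.3 / HDF5 file).
# Assumption: keys 'images', 'depths', 'labels' with MATLAB arrays stored
# dimension-reversed, as in the official nyu_depth_v2_labeled.mat release.
import numpy as np
import h5py

def load_labeled(path):
    """Return (rgb, depth, labels) as (N, H, W, ...) NumPy arrays."""
    with h5py.File(path, "r") as f:
        # Stored as (N, 3, W, H); transpose to (N, H, W, 3) for viewing.
        rgb = np.transpose(f["images"][:], (0, 3, 2, 1))
        depth = np.transpose(f["depths"][:], (0, 2, 1))   # depth in metres
        labels = np.transpose(f["labels"][:], (0, 2, 1))  # class ids, 0 = unlabeled
    return rgb, depth, labels

# Synthetic stand-in with the same layout (real file: 1449 pairs at 640x480).
with h5py.File("demo_nyuv2.mat", "w") as f:
    f["images"] = np.zeros((2, 3, 640, 480), dtype=np.uint8)
    f["depths"] = np.zeros((2, 640, 480), dtype=np.float32)
    f["labels"] = np.zeros((2, 640, 480), dtype=np.uint16)

rgb, depth, labels = load_labeled("demo_nyuv2.mat")
print(rgb.shape, depth.shape, labels.shape)
```

Reading the file directly avoids a MATLAB dependency; only the Toolbox utilities (e.g. for projecting raw depth onto the RGB frames) still require MATLAB.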