MegaDepth is a large-scale dataset for single-view depth prediction, built from Internet photo collections using structure-from-motion (SfM) and multi-view stereo (MVS). The authors run COLMAP to reconstruct many outdoor landmark scenes and derive dense depth maps and validity masks for the images in each reconstruction. The published dataset covers ≈196 distinct locations, with tens to hundreds of images per scene, together with cleaned, scale-normalized dense depth maps and validity masks suitable for training and evaluating single-view depth prediction and related 3D tasks. The dataset was introduced in Li & Snavely, "MegaDepth: Learning Single-View Depth Prediction from Internet Photos" (CVPR 2018, arXiv:1804.00607).

Note: some evaluations use a 19-scene subset of MegaDepth, referred to as "MegaDepth (19)" — specifically scenes indexed 5000–5018 — as an out-of-domain novel-view-synthesis (NVS) evaluation split. This subset is not a separate release of MegaDepth; it is a selection of 19 scenes from the full MegaDepth reconstructions used for out-of-domain NVS testing.
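Because MVS depths from different reconstructions live at arbitrary scales and leave many pixels without a depth estimate, MegaDepth-style training pipelines typically pair each depth map with a validity mask and normalize depths per image. The sketch below illustrates that idea on a synthetic array; the function name `normalize_depth` and the median-based scale factor are illustrative assumptions, not the authors' exact preprocessing code.

```python
import numpy as np

def normalize_depth(depth: np.ndarray):
    """Scale-normalize a dense depth map in the spirit of MegaDepth's
    preprocessing (illustrative sketch, not the published pipeline).

    Pixels with depth <= 0 are treated as invalid (no MVS estimate);
    valid depths are divided by their median so that maps from
    different SfM reconstructions share a comparable scale.
    """
    mask = depth > 0                     # validity mask: True where MVS produced a depth
    scale = np.median(depth[mask])       # per-image scale factor (assumed: median)
    return depth / scale, mask

# Synthetic 2x2 "depth map" standing in for one MegaDepth image;
# the 0.0 entry models a pixel MVS failed to reconstruct.
depth = np.array([[0.0, 2.0],
                  [4.0, 8.0]])
norm, mask = normalize_depth(depth)
# median of the valid depths {2, 4, 8} is 4, so the valid
# normalized values are 0.5, 1.0, and 2.0.
```

A training loss would then be computed only over `mask`, e.g. a scale-invariant log-depth loss restricted to valid pixels.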
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top spot — annotate the step on the progress chart with your name.