Hypersim (Roberts et al., ICCV 2021) is a photorealistic synthetic dataset for holistic indoor scene understanding. It contains 77,400 rendered images of 461 indoor scenes, each with dense per-pixel ground-truth annotations and complete scene information: depth, surface normals, semantic and instance segmentation, an intrinsic decomposition (diffuse reflectance, diffuse illumination, and a non-diffuse residual), full scene geometry, material properties, and camera parameters. The dataset was created from a large repository of professionally authored 3D assets and renderings; code and data are available on GitHub. On this benchmark, Hypersim serves as the synthetic indoor dataset in the zero-shot metric depth evaluation (reported in Table 5 of the paper).
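One practical detail worth knowing when using Hypersim for metric depth evaluation: the released depth images record distance along the camera ray rather than planar depth, so evaluations typically convert one to the other first. The sketch below shows that conversion under the assumptions that the principal point sits at the image centre and that the focal length in pixels is known (the `886.81` value for 1024x768 images is an assumption derived from the scenes' horizontal field of view, not something guaranteed for every camera).

```python
import numpy as np

def ray_distance_to_planar_depth(dist, focal_px):
    """Convert a distance-along-ray map to planar depth.

    `dist` is an (H, W) array of distances from the camera centre to each
    surface point; `focal_px` is the focal length in pixels. The principal
    point is assumed to lie at the image centre.
    """
    h, w = dist.shape
    # Pixel offsets from the (assumed) principal point.
    u = np.arange(w) - (w - 1) / 2.0
    v = np.arange(h) - (h - 1) / 2.0
    uu, vv = np.meshgrid(u, v)
    # Ray length per unit planar depth for each pixel.
    ray_norm = np.sqrt(uu ** 2 + vv ** 2 + focal_px ** 2)
    return dist * focal_px / ray_norm

# Tiny synthetic check: a flat wall 3 m away, viewed head-on.
focal = 886.81  # assumed focal length in pixels for a 1024x768 image
planar = np.full((768, 1024), 3.0)
h, w = planar.shape
u = np.arange(w) - (w - 1) / 2.0
v = np.arange(h) - (h - 1) / 2.0
uu, vv = np.meshgrid(u, v)
# Forward model: planar depth -> distance along each viewing ray.
dist = planar * np.sqrt(uu ** 2 + vv ** 2 + focal ** 2) / focal
recovered = ray_distance_to_planar_depth(dist, focal)
print(np.allclose(recovered, planar))  # True
```

The round trip recovers the planar depth map exactly, which is a quick sanity check to run before scoring a checkpoint against the ground truth.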
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.