Dense Depth for Autonomous Driving (DDAD) is a long-range, multi-camera depth-estimation benchmark released by the Toyota Research Institute (TRI / TRI-ML). Each sample provides synchronized RGB imagery from six cameras together with LiDAR point clouds, poses, camera intrinsics/extrinsics, and additional annotations (2D/3D boxes and semantic labels, as reported in the public repo/blog). According to the TRI-ML release and the references used by later papers, the training split contains 12,650 samples (≈75,900 images across the six cameras) and the validation split contains 3,950 samples (≈15,800 images) with ground-truth depth maps used for evaluation; depths are evaluated to long range, e.g., up to 200 m. DDAD is distributed via the TRI-ML GitHub repository (TRI-ML/DDAD) and has served as an unseen test domain for zero-shot and transfer depth evaluation in several works. No standalone arXiv paper that formally introduces DDAD was found; the dataset is referenced through the repository and in workshop/paper supplements.
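Evaluation on DDAD typically reports standard monocular-depth error metrics computed over valid LiDAR-projected ground-truth pixels, with depths capped at the long-range limit mentioned above (e.g., 200 m). The sketch below shows generic versions of these metrics (abs rel, RMSE, δ<1.25) in NumPy; it is an illustrative implementation under those assumptions, not TRI-ML's official evaluation script, and the function name and default cap are our own choices.

```python
import numpy as np

def depth_metrics(gt, pred, min_depth=0.0, max_depth=200.0):
    """Generic monocular-depth error metrics (abs_rel, rmse, delta<1.25).

    Computed only over valid ground-truth pixels inside (min_depth, max_depth].
    Illustrative sketch -- not the official DDAD evaluation code.
    """
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    # Keep pixels with valid (positive) ground-truth depth under the range cap.
    mask = (gt > min_depth) & (gt <= max_depth)
    gt, pred = gt[mask], pred[mask]
    abs_rel = float(np.mean(np.abs(gt - pred) / gt))      # mean relative error
    rmse = float(np.sqrt(np.mean((gt - pred) ** 2)))      # root-mean-square error
    ratio = np.maximum(gt / pred, pred / gt)              # symmetric depth ratio
    delta1 = float(np.mean(ratio < 1.25))                 # accuracy threshold
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}

# Tiny synthetic example: the 300 m pixel is masked out by the 200 m cap,
# and a perfect prediction yields zero error on the remaining pixels.
gt = np.array([[10.0, 50.0], [150.0, 300.0]])
print(depth_metrics(gt, gt))
```

A real evaluation would sparsify the ground truth from projected LiDAR returns per camera before applying this function; the depth cap is the main DDAD-specific detail, since other driving benchmarks often evaluate only to 80 m.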
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.