Codesota · Computer Vision · Depth estimation · DDAD (relative)
Depth estimation · benchmark dataset · EN

Dense Depth for Autonomous Driving (DDAD).

Dense Depth for Autonomous Driving (DDAD) is a long-range, multi-camera depth-estimation dataset for autonomous driving, released by the Toyota Research Institute (TRI / TRI-ML). It provides synchronized RGB imagery from six cameras together with LiDAR point clouds, vehicle poses, camera intrinsics/extrinsics, and additional annotations (2D/3D boxes and semantic labels, per the public repo and blog). According to the TRI-ML release and the references used by later papers, the training split contains 12,650 samples (≈75,900 images across the six cameras) and the validation split contains 3,950 samples (≈15,800 images) with ground-truth dense depth maps used for evaluation; depths are evaluated to long range, e.g. up to 200 m. DDAD was released via the TRI-ML GitHub repository (TRI-ML/DDAD) and has been used as an unseen test domain for zero-shot and transfer depth evaluation in several works. No standalone arXiv paper formally introduces DDAD; the dataset is distributed via the TRI-ML repository and referenced in workshop papers and supplements.
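The "(relative)" qualifier of this benchmark means predictions are scale-ambiguous and are typically aligned to the metric ground truth by median scaling before metrics are computed. A minimal sketch of that common protocol, assuming NumPy arrays of predicted and ground-truth depth; the 200 m cap and the masking rule here are assumptions for illustration, not taken from the DDAD repo:

```python
import numpy as np

def evaluate_relative_depth(pred, gt, min_depth=0.0, max_depth=200.0):
    """Scale-invariant ("relative") depth evaluation sketch.

    Assumed protocol: valid pixels have ground truth in (min_depth, max_depth];
    the scale ambiguity is resolved by median scaling, as is common in
    self-supervised monocular depth work.
    """
    mask = (gt > min_depth) & (gt <= max_depth)
    pred = pred[mask].astype(np.float64)
    gt = gt[mask].astype(np.float64)

    # Median scaling: align the scale-free prediction to metric ground truth.
    pred = pred * (np.median(gt) / np.median(pred))

    abs_rel = float(np.mean(np.abs(pred - gt) / gt))     # mean absolute relative error
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))     # root mean squared error (m)
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = float(np.mean(ratio < 1.25))                # accuracy under threshold 1.25
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}
```

A prediction that is correct up to a global scale factor scores perfectly under this protocol, which is exactly what the median-scaling step is meant to allow.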

§ 01 · Leaderboard

Best published scores.

No results indexed yet; be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
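The "frozen commit + seed" and "declared evaluation environment" requirements above can be sketched as a small harness header. The function names and seed value are hypothetical, not part of any Codesota tooling:

```python
import json
import platform
import random

import numpy as np

def freeze_seeds(seed: int = 1234) -> None:
    """Pin every RNG the evaluation touches so reruns match the submitted score.

    Hypothetical helper: a real submission would also seed its deep-learning
    framework (e.g. torch.manual_seed) and pin any CUDA determinism flags.
    """
    random.seed(seed)
    np.random.seed(seed)

def declared_environment() -> dict:
    """Record the evaluation environment to submit alongside the score."""
    return {
        "python": platform.python_version(),
        "numpy": np.__version__,
        "platform": platform.platform(),
    }

if __name__ == "__main__":
    freeze_seeds()
    print(json.dumps(declared_environment(), indent=2))
```

Re-running the script after `freeze_seeds` with the same seed reproduces the same random draws, which is the property a reviewer needs in order to verify a submitted number.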