BRAVO is a benchmark and challenge dataset for evaluating the out-of-distribution (OOD) robustness and reliability of semantic segmentation models in urban driving scenes. Created for the BRAVO Challenge, organized by Valeo together with the UNCV workshop organizers, it targets two aspects of reliability: (1) semantic reliability, i.e. accuracy and calibration under realistic perturbations, and (2) OOD reliability, i.e. detection and handling of unknown, out-of-distribution content. The benchmark contains urban-scene images with diverse natural degradations and realistic-looking synthetic corruptions; in the challenge setup, models are typically trained on Cityscapes (or other accepted training sets) and evaluated on BRAVO to measure OOD generalization. The evaluation toolkit and protocol are available from the BRAVO Challenge repository (valeoai/bravo_challenge), and the challenge and its results are described in the ECCV/UNCV BRAVO challenge report (see arXiv:2409.15107).
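To make the evaluation setup concrete, here is a minimal sketch of the kind of inference pass such a protocol builds on: a Cityscapes-style segmentation model producing, per pixel, a predicted class and a confidence score (the quantities that semantic-reliability and OOD-reliability metrics are typically computed from). The model choice, file paths, and the max-softmax confidence heuristic are illustrative assumptions; the official BRAVO toolkit defines its own submission format and scoring.

```python
# Hypothetical sketch, not the official BRAVO toolkit API.
# Runs a segmentation model on one image and exports a per-pixel class map
# plus a per-pixel confidence map (max softmax probability).
import numpy as np
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in backbone with 19 Cityscapes classes; a real submission would load
# the checkpoint actually being evaluated.
model = deeplabv3_resnet50(num_classes=19).eval().to(device)

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("frame.png").convert("RGB")   # hypothetical input path
x = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    logits = model(x)["out"]                     # (1, 19, H, W)
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)                # per-pixel confidence and class

# Save outputs; the official toolkit prescribes its own encoding for submissions.
Image.fromarray(pred.squeeze(0).byte().cpu().numpy()).save("pred_class.png")
np.save("pred_conf.npy", conf.squeeze(0).cpu().numpy())
```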
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
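As a rough idea of what a reproduction script might look like, here is a hedged skeleton: it loads a checkpoint, iterates over an image directory, and writes out prediction and confidence files. The flags, paths, model architecture, and helper names are all illustrative assumptions, not a required interface.

```python
# Hypothetical reproduction-script skeleton; adapt to your own model and layout.
import argparse
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torchvision.models.segmentation import deeplabv3_resnet50


def load_model(checkpoint: Path, device: str) -> torch.nn.Module:
    # Assumes the checkpoint is a plain state dict for this architecture.
    model = deeplabv3_resnet50(num_classes=19)
    model.load_state_dict(torch.load(checkpoint, map_location=device))
    return model.eval().to(device)


def main() -> None:
    parser = argparse.ArgumentParser(description="Regenerate predictions from a checkpoint.")
    parser.add_argument("--checkpoint", type=Path, required=True)
    parser.add_argument("--images", type=Path, required=True, help="Directory of evaluation images.")
    parser.add_argument("--out", type=Path, default=Path("predictions"))
    args = parser.parse_args()

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = load_model(args.checkpoint, device)
    args.out.mkdir(parents=True, exist_ok=True)

    for path in sorted(args.images.glob("*.png")):
        # Minimal preprocessing for illustration; use your training-time transforms.
        x = torch.from_numpy(np.array(Image.open(path).convert("RGB"))).permute(2, 0, 1)
        x = x.float().div(255).unsqueeze(0).to(device)
        with torch.no_grad():
            probs = torch.softmax(model(x)["out"], dim=1)
            conf, pred = probs.max(dim=1)
        Image.fromarray(pred.squeeze(0).byte().cpu().numpy()).save(args.out / f"{path.stem}_pred.png")
        np.save(args.out / f"{path.stem}_conf.npy", conf.squeeze(0).cpu().numpy())


if __name__ == "__main__":
    main()
```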