Codesota · Computer Vision · Image segmentation · BRAVO (OOD)
Image segmentation · benchmark dataset · EN

BRAVO (BRAVO Semantic Segmentation / BRAVO Challenge dataset).

BRAVO is a benchmark and challenge dataset for evaluating the out-of-distribution (OOD) robustness and reliability of semantic segmentation models in urban driving scenes. Created and organized by the BRAVO Challenge (Valeo and UNCV organizers), BRAVO focuses on two reliability aspects: (1) semantic reliability, i.e. accuracy and calibration under perturbations, and (2) OOD reliability, i.e. detection and handling of unknown out-of-distribution content. The benchmark contains urban-scene images with diverse natural degradations and realistic-looking synthetic corruptions; in the challenge setup, models are typically trained on Cityscapes (or another accepted training set) and evaluated on BRAVO to measure OOD generalization. The BRAVO toolkit and evaluation protocol are available from the BRAVO Challenge repository (valeoai/bravo_challenge), and the challenge and its results are described in the ECCV/UNCV BRAVO challenge papers (see arXiv:2409.15107).
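The two reliability aspects above map onto standard metric families: calibration error for semantic reliability and a detection score (e.g. AUROC over per-pixel uncertainty) for OOD reliability. The sketch below illustrates both on flattened per-pixel arrays; it is a simplified illustration under our own assumptions, not the official BRAVO metric implementation, whose exact protocol is defined by the valeoai/bravo_challenge toolkit.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """Binned ECE over per-pixel confidences (semantic reliability).

    conf:    float array in [0, 1], the model's max softmax per pixel.
    correct: 0/1 array, whether the predicted class was right.
    """
    ece = 0.0
    for lo in np.linspace(0.0, 1.0, n_bins, endpoint=False):
        hi = lo + 1.0 / n_bins
        mask = (conf >= lo) & (conf < hi)
        if mask.any():
            # weight each bin by its pixel fraction, penalize |confidence - accuracy|
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

def ood_auroc(scores_in, scores_out):
    """AUROC separating OOD pixels from in-distribution pixels
    via the rank-sum (Mann-Whitney U) formulation; higher score = more OOD-like.
    (Ties are ignored here, which is fine for illustration.)"""
    scores = np.concatenate([scores_in, scores_out])
    ranks = scores.argsort().argsort() + 1  # 1-based ranks
    n_in, n_out = len(scores_in), len(scores_out)
    u = ranks[n_in:].sum() - n_out * (n_out + 1) / 2
    return u / (n_in * n_out)
```

A perfectly separable uncertainty score yields an AUROC of 1.0, and a model that is 80% confident while always correct has an ECE of 0.2; the official benchmark aggregates such quantities over the full BRAVO splits.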

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result Read submission guide
What a submission needs
  • 01 A public checkpoint or API endpoint
  • 02 A reproduction script with frozen commit + seed
  • 03 Declared evaluation environment (Python version, dependencies)
  • 04 One row per metric declared by this dataset
  • 05 A contact so we can follow up on discrepancies
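The checklist above can be packaged as a small self-describing script that pins the seed and commit, records the environment, and emits one row per metric. This is a hypothetical skeleton: every name in it (the commit hash, metric names, contact address) is a placeholder, not part of any real toolkit.

```python
"""Hypothetical reproduction-report skeleton matching the submission checklist."""
import json
import platform
import random
import sys

SEED = 1234            # item 02: frozen seed
COMMIT = "deadbeef"    # item 02: frozen commit hash of the code used (placeholder)

def declared_environment():
    """Item 03: record the evaluation environment alongside the results."""
    return {"python": platform.python_version(), "seed": SEED, "commit": COMMIT}

def result_rows(metrics):
    """Item 04: one row per metric declared by the dataset."""
    return [{"metric": name, "value": value} for name, value in sorted(metrics.items())]

if __name__ == "__main__":
    random.seed(SEED)  # make any sampling in the evaluation deterministic
    report = {
        "env": declared_environment(),
        "rows": result_rows({"mIoU": 0.0, "ECE": 0.0}),  # placeholder values
        "contact": "you@example.org",  # item 05: contact for discrepancies
    }
    json.dump(report, sys.stdout, indent=2)
```

Emitting a single JSON report like this makes it easy for reviewers to diff the declared environment against the one they reproduce in.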