GEO-Bench is a curated benchmark suite for Earth-monitoring (geospatial) tasks, introduced by Lacoste et al. (2023). It comprises 12 downstream tasks (six classification and six segmentation) assembled from existing geospatial datasets and adapted into a standard evaluation protocol for Earth-observation foundation models. The classification suite reported in the paper aggregates the per-dataset classification tasks and reports a mean classification score across them. GEO-Bench is multimodal in scope, covering optical/RGB, multispectral, SAR, and other Earth-observation modalities according to the project resources, and ships code to run evaluations and reproduce results; see the project repository and the paper supplement for the full list of component datasets and evaluation details.

Source: Lacoste et al., "GEO-Bench: Toward Foundation Models for Earth Monitoring" (NeurIPS 2023 / arXiv:2306.03831) and the ServiceNow GEO-Bench GitHub repository.
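As a minimal sketch of how a suite-level number like the one tracked here is formed, the snippet below averages per-task classification scores into a single value. The task keys and scores are purely illustrative placeholders, not actual GEO-Bench results, and `aggregate_score` is a hypothetical helper, not part of the official evaluation code.

```python
def aggregate_score(per_task_scores: dict[str, float]) -> float:
    """Average per-task scores into one suite-level number (simple mean)."""
    if not per_task_scores:
        raise ValueError("no task scores provided")
    return sum(per_task_scores.values()) / len(per_task_scores)


# Illustrative accuracies for six classification tasks (placeholder values,
# not real benchmark results).
example_scores = {
    "task_1": 0.91,
    "task_2": 0.84,
    "task_3": 0.77,
    "task_4": 0.88,
    "task_5": 0.80,
    "task_6": 0.86,
}

print(round(aggregate_score(example_scores), 4))  # prints 0.8433
```

The official evaluation code in the project repository handles splits, seeds, and per-dataset metrics; this sketch only shows the final averaging step.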
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.