Codesota · Computer Vision · Image Classification · GEO-Bench (classification suite)
Image Classification · benchmark dataset · EN

GEO-Bench: Toward Foundation Models for Earth Monitoring.

GEO-Bench is a curated benchmark suite for Earth-monitoring (geospatial) tasks, introduced by Lacoste et al. (2023). The benchmark comprises 12 downstream tasks (six classification and six segmentation) assembled from existing geospatial datasets and adapted into a standard evaluation protocol for Earth-observation foundation models. The classification "suite" reported in the paper aggregates the per-dataset classification tasks and reports a mean classification score across them. GEO-Bench is multimodal in scope (covering optical/RGB, multispectral, SAR, and other Earth-observation modalities, per the project resources) and ships code to run evaluations and reproduce results; see the project repository and the paper supplement for the full list of component datasets and evaluation details. Source: Lacoste et al., "GEO-Bench: Toward Foundation Models for Earth Monitoring" (NeurIPS 2023 / arXiv:2306.03831) and the ServiceNow GEO-Bench GitHub repository.
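The suite-level aggregation described above (a mean over per-task classification scores) can be sketched in a few lines. The task names and scores below are illustrative placeholders, not GEO-Bench results, and the paper's exact protocol (including any per-task normalization) is defined in the project code:

```python
# Sketch of suite-level aggregation: mean of per-task classification scores.
# Task names and score values are illustrative placeholders only.
per_task_accuracy = {
    "task_a": 0.91,
    "task_b": 0.78,
    "task_c": 0.85,
    "task_d": 0.66,
    "task_e": 0.73,
    "task_f": 0.88,
}

suite_score = sum(per_task_accuracy.values()) / len(per_task_accuracy)
print(f"classification-suite mean: {suite_score:.4f}")  # → classification-suite mean: 0.8017
```

A leaderboard entry for the classification suite would report this single aggregate alongside the six per-task scores.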

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
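Requirements 02 and 03 above (frozen commit + seed, declared environment) can be sketched as a minimal reproduction-script skeleton. Everything here is illustrative: the seed value, commit hash, and report fields are placeholders, not part of any GEO-Bench or CodeSOTA tooling.

```python
# Hypothetical reproduction-script skeleton: pin the seed and record the
# evaluation environment so the result can be rerun. All values are
# placeholders; a real script would also check out the frozen commit and
# seed any other RNGs (e.g. numpy/torch) the evaluation actually uses.
import json
import platform
import random

SEED = 1234         # frozen seed (illustrative value)
COMMIT = "0000000"  # frozen git commit of the eval code (placeholder)

def set_seed(seed: int) -> None:
    """Seed Python's RNG; extend to other libraries in a real run."""
    random.seed(seed)

def environment_report() -> dict:
    """Declared evaluation environment, per requirement 03."""
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "seed": SEED,
        "commit": COMMIT,
    }

set_seed(SEED)
print(json.dumps(environment_report()))
```

Printing the report as JSON makes it easy to attach to a submission and to diff against a reviewer's rerun.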