
Land-cOVEr Domain Adaptation (LoveDA).

LoveDA (Land-cOVEr Domain Adaptation) is a high-resolution remote-sensing land-cover dataset built for semantic segmentation and domain-adaptive (cross-domain) semantic segmentation research. The imagery comes from three different cities and is explicitly split into two domains (urban vs. rural) to study transferability and unsupervised domain adaptation (UDA). According to the authors, LoveDA comprises 5,987 high-spatial-resolution (HSR) images with 166,768 annotated objects covering seven common land-cover categories. The paper benchmarks 11 semantic segmentation methods and 8 UDA methods. Code and data are hosted in the authors' GitHub repository (Junjue-Wang/LoveDA).
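For orientation, here is a minimal PyTorch loading sketch, not the authors' official loader. The directory layout (Train/<Urban|Rural>/images_png and masks_png) and the mask encoding (0 = no-data/ignore, 1-7 = the seven classes) are assumptions based on the GitHub release; verify both against your local copy.

```python
# Minimal PyTorch Dataset sketch for LoveDA (a sketch, not the official loader).
# Assumed layout: <root>/Train/Urban/images_png/*.png and .../masks_png/*.png
# Assumed mask encoding: 0 = no-data (ignore), 1..7 = the seven classes.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

CLASSES = ("background", "building", "road", "water",
           "barren", "forest", "agriculture")  # assumed order for ids 1..7


class LoveDADataset(Dataset):
    def __init__(self, root: str, split: str = "Train", domain: str = "Urban"):
        base = Path(root) / split / domain
        self.image_paths = sorted((base / "images_png").glob("*.png"))
        self.mask_dir = base / "masks_png"

    def __len__(self) -> int:
        return len(self.image_paths)

    def __getitem__(self, idx: int):
        img_path = self.image_paths[idx]
        image = np.asarray(Image.open(img_path).convert("RGB"))
        mask = np.asarray(Image.open(self.mask_dir / img_path.name))
        # HWC uint8 -> CHW float in [0, 1]; mask stays as integer class ids.
        image = torch.from_numpy(image.copy()).permute(2, 0, 1).float() / 255.0
        mask = torch.from_numpy(mask.copy()).long()
        return image, mask
```

Usage would look like `LoveDADataset("/path/to/LoveDA", split="Train", domain="Rural")`; note that the Test split's masks are withheld for the official evaluation server, so this class only applies to the Train/Val splits.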

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No benchmark results indexed yet. Be the first to submit a score.
§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
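For item 02 in particular, the sketch below shows what a frozen seed plus a declared environment can look like, together with the mIoU computation typically reported for LoveDA. The seed value, the synthetic stand-in for a prediction loop, and the label conventions (0 ignored, ids 1-7 scored) are illustrative assumptions, not the official evaluation code.

```python
# Sketch of the evaluation half of a reproduction script: a frozen seed,
# a printed environment declaration, and mIoU over LoveDA's seven classes
# (mask label 0 = no-data is ignored). Model loading is out of scope; feed
# real predictions into `update` and report `miou` as one row per metric.
import random
import sys

import numpy as np
import torch

SEED = 2021          # freeze alongside the exact commit hash (illustrative value)
NUM_CLASSES = 7      # assumed: labels 1..7 in the masks; 0 = no-data/ignore


def set_seed(seed: int) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)


def update(conf: np.ndarray, mask: np.ndarray, pred: np.ndarray) -> None:
    """Accumulate a confusion matrix; `mask` and `pred` hold class ids 1..7."""
    valid = mask > 0                                  # drop no-data pixels
    idx = (mask[valid] - 1) * NUM_CLASSES + (pred[valid] - 1)
    conf += np.bincount(idx, minlength=NUM_CLASSES**2).reshape(
        NUM_CLASSES, NUM_CLASSES)


def miou(conf: np.ndarray) -> float:
    inter = np.diag(conf).astype(float)
    union = conf.sum(0) + conf.sum(1) - inter
    return float(np.nanmean(inter / np.maximum(union, 1)))


if __name__ == "__main__":
    print("python", sys.version.split()[0], "| torch", torch.__version__)
    set_seed(SEED)
    conf = np.zeros((NUM_CLASSES, NUM_CLASSES), dtype=np.int64)
    # Synthetic stand-in for a real prediction loop over the val split.
    mask = np.random.randint(0, 8, size=(2, 512, 512))
    pred = np.random.randint(1, 8, size=(2, 512, 512))
    update(conf, mask, pred)
    print(f"mIoU: {miou(conf):.4f}")
```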