LoveDA (Land-cOVEr Domain Adaptation) is a high-resolution remote-sensing land-cover dataset built for semantic segmentation and domain-adaptive (cross-domain) semantic segmentation research. The imagery comes from three different cities and is explicitly split into two domains (urban vs. rural) to study transferability and unsupervised domain adaptation. According to the authors, LoveDA comprises 5,987 high-spatial-resolution (HSR) images with 166,768 annotated objects covering seven common land-cover categories. The paper benchmarks 11 semantic segmentation methods and 8 unsupervised domain-adaptation (UDA) methods. Code and data are hosted in the authors' GitHub repository (Junjue-Wang/LoveDA).
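Since the dataset ships as image/mask file pairs grouped by split and domain, a first step in most pipelines is matching each image to its segmentation mask. The sketch below pairs files by filename stem; the directory layout shown is an assumption modeled on typical `<split>/<domain>/images_png` releases, not a guaranteed description of the official archive.

```python
from pathlib import PurePosixPath

def pair_images_with_masks(image_paths, mask_paths):
    """Pair each image with the mask that shares its filename stem."""
    masks_by_stem = {PurePosixPath(p).stem: p for p in mask_paths}
    pairs = []
    for img in image_paths:
        stem = PurePosixPath(img).stem
        if stem in masks_by_stem:  # skip images with no matching mask
            pairs.append((img, masks_by_stem[stem]))
    return pairs

# Hypothetical paths for illustration; adjust to the actual archive layout.
images = ["Train/Urban/images_png/1366.png", "Train/Urban/images_png/1367.png"]
masks = ["Train/Urban/masks_png/1366.png", "Train/Urban/masks_png/1367.png"]

for img, mask in pair_images_with_masks(images, masks):
    print(img, "->", mask)
```

Pairing by stem rather than by list position keeps the mapping correct even if one domain's mask folder is incomplete or sorted differently.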
No results have been indexed yet. Be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.