LVIS is a large-scale, high-quality dataset for instance segmentation and object detection, containing roughly 164k images and about 2M instance annotations for over 1,000 object categories. It focuses on long-tail object recognition, providing a larger and more fine-grained vocabulary than COCO. LVIS reuses the images from the COCO dataset but defines its own splits and its own exhaustive, high-quality annotations. Categories are bucketed as frequent, common, or rare according to how many training images contain them, and the dataset provides standardized evaluation metrics such as mean Average Precision (mAP).
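As a quick illustration of the long-tail bucketing, the sketch below applies the thresholds from the LVIS paper (rare: 1–10 training images, common: 11–100, frequent: more than 100) to a few hypothetical per-category image counts; the counts themselves are made up for the example.

```python
def lvis_frequency_bucket(image_count: int) -> str:
    """Bucket a category by the number of training images it appears in,
    using the thresholds defined in the LVIS paper."""
    if image_count <= 10:
        return "rare"       # 1-10 images
    if image_count <= 100:
        return "common"     # 11-100 images
    return "frequent"       # >100 images

# Hypothetical per-category counts, purely for illustration:
counts = {"unicycle": 7, "birdbath": 42, "person": 5000}
buckets = {name: lvis_frequency_bucket(n) for name, n in counts.items()}
print(buckets)  # {'unicycle': 'rare', 'birdbath': 'common', 'person': 'frequent'}
```

Per-bucket AP (APr, APc, APf) is reported alongside overall mAP, which is what makes the rare bucket the interesting part of the benchmark.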
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.