The Microsoft COCO 2017 Instance Segmentation dataset (COCO 2017) is a large-scale benchmark for object detection and instance segmentation. It provides images with per-instance segmentation annotations (polygon and RLE masks), bounding boxes, and category labels for the standard set of 80 COCO detection/segmentation categories. The splits commonly used for benchmarking are train2017 and val2017 (HF mirrors list ~118,287 training images and 5,000 validation images), plus held-out test splits; annotations ship in COCO JSON format. COCO was introduced in Lin et al., "Microsoft COCO: Common Objects in Context" (arXiv:1405.0312 / ECCV 2014) and remains a standard benchmark for instance segmentation, object detection, and related tasks.
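To make the annotation format concrete, here is a minimal sketch of decoding a COCO-style *uncompressed* RLE segmentation mask. The dict shape mirrors the COCO JSON `segmentation` field for crowd regions (`{"counts": [...], "size": [height, width]}`); this is an illustrative re-implementation, not the pycocotools API, and real annotations often use compressed RLE strings instead.

```python
def decode_rle(rle):
    """Expand an uncompressed COCO RLE dict into a 2D binary mask.

    COCO stores counts in column-major (Fortran) order, alternating
    runs of 0s and 1s and always starting with a run of 0s.
    """
    height, width = rle["size"]
    flat = []
    value = 0
    for count in rle["counts"]:
        flat.extend([value] * count)  # emit `count` pixels of the current value
        value = 1 - value             # alternate 0 -> 1 -> 0 ...
    # Undo column-major order: pixel (row r, col c) lives at flat[c * height + r].
    return [[flat[c * height + r] for c in range(width)]
            for r in range(height)]

# Tiny 3x3 example: a run of 2 zeros, 5 ones, 2 zeros (column-major).
mask = decode_rle({"counts": [2, 5, 2], "size": [3, 3]})
# → [[0, 1, 1], [0, 1, 0], [1, 1, 0]]
```

Polygon-style annotations, by contrast, list `[x1, y1, x2, y2, ...]` vertex coordinates and must be rasterized before computing pixel-level metrics.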
1 result indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | mAP |
|---|---|---|---|---|---|
| 01 | Segment Anything Model (SAM) | — | Apr 2023 | Segment Anything · code | 46.50 |
Each row below marks a model that set a new record on mAP (higher is better). Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
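The mAP metric averages precision over mask-IoU thresholds (0.50 to 0.95 in COCO's standard protocol), so the core primitive is per-instance mask IoU. A minimal sketch using binary masks as nested lists (a hypothetical helper for illustration, not the official COCOeval implementation):

```python
def mask_iou(a, b):
    """Intersection-over-union of two equal-sized binary masks."""
    inter = union = 0
    for row_a, row_b in zip(a, b):
        for pa, pb in zip(row_a, row_b):
            inter += pa & pb  # pixel in both masks
            union += pa | pb  # pixel in either mask
    return inter / union if union else 0.0

pred = [[1, 1, 0],
        [1, 1, 0],
        [0, 0, 0]]
gt   = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
# Intersection = 2 pixels, union = 6 pixels → IoU = 1/3.
iou = mask_iou(pred, gt)
```

A predicted instance counts as a true positive at threshold t only if its IoU with a matched ground-truth mask is at least t, which is why scores drop sharply at the stricter thresholds.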
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.