Codesota · Benchmark · LVIS (Instance Segmentation)

LVIS (Instance Segmentation)

LVIS is a large-scale, high-quality dataset for instance segmentation containing roughly 164k images and about 2M instance annotations spanning over 1,000 object categories. It focuses on long-tail object recognition, providing a larger and more detailed vocabulary than COCO. LVIS uses the same images as COCO but defines its own splits and annotations, optimized for instance segmentation. The dataset covers frequent, common, and rare categories and provides standardized evaluation metrics, chiefly mean Average Precision (mAP) over segmentation masks.

§ 01 · SOTA history

Year over year.

Not enough data to show trend.
§ 02 · Leaderboard

Results by metric.

Only 1 model on this benchmark

mAP

mAP (mean Average Precision, averaged over categories and IoU thresholds) is the reported evaluation metric for LVIS (Instance Segmentation). Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.

Higher is better

Trust tiers for mAP: verified · paper · vendor · community · unverified
Rank | Model | Trust | Score | Year | Source
01 | Segment Anything Model (SAM) (dataset: LVIS (Instance Segmentation)) | paper | 44.7 | N/A | Source ↗
§ 04 · Submit a result

Add to the leaderboard.
