The Met is a large-scale instance-level recognition dataset built from the Metropolitan Museum of Art Open Access collection. The training set contains roughly 400k images spanning more than 224k classes, with each museum exhibit treated as a distinct class; the resulting long-tail, many-singleton distribution resembles a few-shot scenario. The query set consists of about 1,100 images taken by museum visitors and annotated with ground truth, supplemented by a set of out-of-distribution distractor queries unrelated to The Met. Evaluation uses average classification accuracy (ACC) on the Met queries and Global Average Precision (GAP). The dataset was introduced to support instance-level recognition and retrieval research in the artwork domain and to benchmark recognition under the distribution shift between studio-like catalog images and in-the-wild visitor photos.
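For reference, GAP here is the micro-average precision commonly used in instance-level recognition challenges: each query contributes one prediction with a confidence score, all predictions are ranked jointly by confidence, and precision is accumulated at each correctly classified rank. The sketch below assumes that setup; the function name and input format are illustrative, not taken from the dataset's official evaluation code.

```python
def gap(predictions, num_queries):
    """Global Average Precision (micro-AP) over a joint ranking.

    predictions: list of (confidence, is_correct) pairs, one top-1
        prediction per answered query (queries may be left unanswered).
    num_queries: total number of ground-truth queries M.
    """
    correct = 0
    total = 0.0
    # Rank all predictions across all queries by descending confidence.
    ranked = sorted(predictions, key=lambda p: -p[0])
    for rank, (conf, is_correct) in enumerate(ranked, start=1):
        if is_correct:
            correct += 1
            total += correct / rank  # precision at this rank
    return total / num_queries
```

Because the ranking is global, a confident wrong answer pushes down every correct answer ranked below it, so abstaining on uncertain queries can raise GAP even though it cannot raise ACC.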
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.