ImageNet-Hard is a robustness benchmark of "hard" ImageNet-scale examples curated to challenge modern vision models. It contains approximately 10,980 images drawn from eight ImageNet variants and related benchmarks: ImageNet, ImageNet-V2, ImageNet-Sketch, ImageNet-C, ImageNet-R, ImageNet-ReaL, ImageNet-A, and ObjectNet. The set was introduced in the NeurIPS 2023 Datasets & Benchmarks paper "ImageNet-Hard: The Hardest Images Remaining from a Study of the Power of Zoom and Spatial Biases in Image Classification," which collected the hardest remaining examples after studying model behavior under zoom and spatial-bias interventions. Each image ships with its class label(s) and an origin field identifying the source dataset. The benchmark targets classification robustness and out-of-distribution / hard-example evaluation; the Hugging Face dataset card lists the task category as image-classification, the license as MIT, and the size category as 10K–100K images.
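Since the dataset is distributed through the Hugging Face Hub, a minimal loading sketch looks like the following. The repo id `taesiri/imagenet-hard`, the `validation` split name, and the `image` / `label` / `origin` field names are assumptions based on the public dataset card; verify them against the card before relying on them.

```python
# Minimal sketch: load ImageNet-Hard via the Hugging Face `datasets` library.
# Repo id, split name, and field names are assumptions from the dataset card.
from datasets import load_dataset

ds = load_dataset("taesiri/imagenet-hard", split="validation")

example = ds[0]
print(example["image"].size)  # PIL image
print(example["label"])       # class label(s); may hold several valid classes
print(example["origin"])      # source dataset the image was drawn from
```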
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
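For reference, a reproduction script can be as simple as the hypothetical sketch below, which scores an off-the-shelf torchvision ResNet-50 on the benchmark; swap in your own checkpoint and preprocessing. The same repo id, split, and field names as above are assumptions, and the multi-label handling assumes the `label` field may list several valid classes per image.

```python
# Hypothetical reproduction script: evaluate a checkpoint on ImageNet-Hard
# and print top-1 accuracy. The ResNet-50 here is a placeholder; substitute
# your own model and preprocessing. Dataset schema details are assumptions.
import torch
from datasets import load_dataset
from torchvision.models import resnet50, ResNet50_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = ResNet50_Weights.IMAGENET1K_V2
model = resnet50(weights=weights).eval().to(device)
preprocess = weights.transforms()

ds = load_dataset("taesiri/imagenet-hard", split="validation")

correct, total = 0, 0
with torch.no_grad():
    for example in ds:
        x = preprocess(example["image"].convert("RGB")).unsqueeze(0).to(device)
        pred = model(x).argmax(dim=1).item()
        # Count a hit if the prediction matches any of the valid labels
        # (assumption: "label" may be a list of acceptable classes).
        labels = example["label"]
        labels = labels if isinstance(labels, list) else [labels]
        correct += int(pred in labels)
        total += 1

print(f"top-1 accuracy: {correct / total:.4f}")
```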