ObjectNet is a bias-controlled, real-world test set for object recognition, designed to measure how robust classification models are under distribution shift. By design it controls for common dataset biases (background, rotation, and viewpoint), and it was collected via a crowdsourced, highly automated image-capture and annotation pipeline. The dataset contains roughly 50,000 high-resolution images across 313 object classes, 113 of which overlap with ImageNet classes. ObjectNet is released as a test set only, with no paired training set, so that scores reflect true generalization; modern object-recognition models evaluated on it dropped roughly 40-45 percentage points in accuracy relative to their performance on standard benchmarks. The dataset website provides downloads, metadata, and label formats. Sources: the NeurIPS 2019 paper (Barbu et al.) and the official ObjectNet site (objectnet.dev).
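Since ObjectNet is a test-only benchmark, a typical evaluation maps each overlapping ObjectNet class to its ImageNet class index and computes top-1 accuracy over the mapped labels. The sketch below is illustrative only: the mapping entries and index values are hypothetical placeholders, not the official label files shipped with the dataset.

```python
def top1_accuracy(predictions, labels):
    """Fraction of examples where the predicted class index matches the label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical mapping from ObjectNet class names to ImageNet class indices
# for two of the overlapping classes; the real mapping is distributed with
# the dataset and covers all 113 overlapping classes.
objectnet_to_imagenet = {"chair": 423, "teapot": 849}

# Toy example: model outputs (ImageNet indices) vs. ground-truth labels
# already mapped through the table above.
preds = [423, 423, 849, 0]
labels = [423, 849, 849, 423]

print(f"Top-1 accuracy: {top1_accuracy(preds, labels):.2f}")  # → 0.50
```

In practice a submitted checkpoint would be run over every image, predictions restricted to the overlapping classes, and the resulting top-1 score reported on the chart.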
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it sets a new top mark, annotate the step on the progress chart with your name.