Codesota · Computer Vision · Image Classification · ObjectNet
Image Classification · benchmark dataset

ObjectNet: A large-scale bias-controlled dataset for pushing the limits of object recognition models.

ObjectNet is a bias-controlled, real-world test set designed to probe the robustness of object-recognition models on out-of-distribution images. Its crowdsourced, highly automated image-capture and annotation pipeline deliberately controls for common dataset biases: object backgrounds, rotations, and viewpoints are all varied. The dataset contains roughly 50,000 high-resolution images across 313 object classes, 113 of which overlap with ImageNet classes. ObjectNet is provided strictly as a test set, with no paired training set, so that it measures true generalization; when state-of-the-art object recognizers were evaluated on it, accuracy dropped by roughly 40–45 percentage points relative to their scores on standard benchmarks. The dataset website provides downloads, metadata, and label formats. Sources: the NeurIPS 2019 paper (Barbu et al.) and the official ObjectNet site (objectnet.dev).
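Because ObjectNet ships as a test set only, evaluation typically amounts to mapping its class folders onto a model's ImageNet labels and scoring top-1 accuracy on the overlapping classes. The sketch below assumes a directory-per-class image layout and a JSON mapping file named `objectnet_to_imagenet.json`; those names are illustrative, so check the actual download's metadata for the real file names and format.

```python
import json
from pathlib import Path

def objectnet_top1(predict, images_dir, mappings_dir):
    """Top-1 accuracy on the ObjectNet classes that overlap ImageNet.

    predict: callable taking an image path and returning a predicted
    ImageNet class id. Only classes present in the mapping (the ~113
    ImageNet-overlapping ones) are scored; the rest are skipped.
    """
    # Assumed layout: one JSON dict from ObjectNet folder name -> ImageNet id.
    mapping = json.loads(
        Path(mappings_dir, "objectnet_to_imagenet.json").read_text()
    )
    correct = total = 0
    for class_dir in Path(images_dir).iterdir():
        if class_dir.name not in mapping:
            continue  # no ImageNet counterpart for this class
        target = mapping[class_dir.name]
        for img in class_dir.glob("*.png"):
            total += 1
            correct += (predict(img) == target)
    return correct / total
```

Note that scoring only the overlap is the common convention for off-the-shelf ImageNet models; models trained with the full 313-class label set would be scored over every class instead.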

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
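Requirements 02 and 03 can be illustrated with a minimal reproduction entry point. Everything below is a hypothetical sketch (the flag names, the default checkpoint path, and the seed value are placeholders), not Codesota's actual submission interface; the frozen commit itself belongs in the submission metadata.

```python
import argparse
import random

def set_seed(seed):
    """Pin the RNGs the evaluation touches so re-runs are deterministic.
    If the evaluation also uses numpy or torch, seed those here as well."""
    random.seed(seed)
    return seed

def parse_args(argv=None):
    # Declaring the seed and checkpoint as explicit flags makes the
    # reported score exactly reproducible from the script alone.
    parser = argparse.ArgumentParser(
        description="reproduction entry point (sketch)"
    )
    parser.add_argument("--seed", type=int, default=1234)
    parser.add_argument("--checkpoint", default="weights.pt")  # placeholder
    args = parser.parse_args(argv)
    set_seed(args.seed)
    return args
```

A pinned `requirements.txt` (or lockfile) checked in at the frozen commit covers the declared-environment requirement.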