ImageNet Large Scale Visual Recognition Challenge (ILSVRC): the standard 1,000-class image classification benchmark. Run annually from 2010 to 2017, it sparked the deep learning revolution in computer vision.
15 results indexed across 2 metrics. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | Top-1 accuracy (%) |
|---|---|---|---|---|---|
| 01 | CoCa (ViT-G/14) | Google | May 2022 | CoCa: Contrastive Captioners are Image-Text Foundation M… | 91.00 |
| 02 | SoViT-400M/14 | Google DeepMind | May 2023 | codesota-editorial | 90.30 |
| 03 | EVA-02 (ViT-L/14+) | BAAI | Mar 2023 | EVA-02: A Visual Representation Powerhouse for Dense Rec… | 90.00 |
| 04 | ViT-22B/14 | Google | Feb 2023 | codesota-editorial | 89.51 |
| 05 | InternViT-6B (InternVL) | OpenGVLab | Jun 2024 | InternVL: Scaling up Vision Foundation Models | 88.20 |
| 06 | maxvit_base_tf_512.in1k | — | Apr 2023 | codesota-editorial | 86.60 |
| 07 | coatnet_2_rw_224.sw_in12k_ft_in1k | — | Sep 2022 | codesota-editorial | 86.58 |
| 08 | nextvit_large.bd_ssld_6m_in1k_384 | ByteDance | Nov 2022 | codesota-editorial | 86.54 |
| 09 | swin_large.ms_in22k_ft_in1k | Microsoft | Mar 2021 | Swin Transformer: Hierarchical Vision Transformer using … | 86.33 |
| 10 | convnext_base.fb_in22k_ft_in1k | Meta AI | Jan 2022 | codesota-editorial | 86.30 |

| # | Model | Org | Submitted | Paper / code | Top-5 accuracy (%) |
|---|---|---|---|---|---|
| 01 | SENet | Momenta | Jan 2017 | codesota-editorial | 97.75 |
| 02 | ResNet-152 | Microsoft | Jan 2015 | codesota-editorial | 96.43 |
| 03 | GoogLeNet | Google | Jan 2014 | codesota-editorial | 93.30 |
| 04 | AlexNet | U. Toronto | Jan 2012 | codesota-editorial | 83.60 |
| 05 | NEC-UIUC | NEC / UIUC | Jan 2010 | codesota-editorial | 71.80 |
Each row below marks a model that broke the then-standing record on top-1 accuracy. Intermediate submissions remain in the leaderboard above; only record-setting entries are re-listed here. Higher scores win, so each successive entry improved on the previous best.
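The record-progression rule above can be sketched in a few lines: sort submissions by date and keep only those that strictly beat the running best. The entries here are a small illustrative subset of the top-1 leaderboard above.

```python
from datetime import date

# Subset of the top-1 leaderboard: (model, submitted, top-1 accuracy %).
entries = [
    ("swin_large.ms_in22k_ft_in1k", date(2021, 3, 1), 86.33),
    ("convnext_base.fb_in22k_ft_in1k", date(2022, 1, 1), 86.30),
    ("CoCa (ViT-G/14)", date(2022, 5, 1), 91.00),
    ("EVA-02 (ViT-L/14+)", date(2023, 3, 1), 90.00),
]

def sota_steps(rows):
    """Return only the record-setting entries, in chronological order."""
    best = float("-inf")
    steps = []
    for name, submitted, score in sorted(rows, key=lambda r: r[1]):
        if score > best:  # higher scores win
            best = score
            steps.append((name, submitted, score))
    return steps

# Only Swin and CoCa ever held the record in this subset; ConvNeXt and
# EVA-02 scored below the standing best at submission time.
print(sota_steps(entries))
```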
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it sets a new record, annotate the step on the progress chart with your name.
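A reproduction script ultimately reports the metrics tabulated above. Below is a minimal, dependency-free sketch of the top-k accuracy computation; the function name and the toy 4-class data are illustrative, and a real script would feed in the checkpoint's logits over the full ImageNet validation set.

```python
def topk_accuracy(logits, labels, k=5):
    """Fraction of samples whose true label is among the k highest logits."""
    hits = 0
    for scores, label in zip(logits, labels):
        # Indices of the k largest scores for this sample.
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Toy check with 4 classes: top-1 counts only exact argmax matches,
# while larger k is more permissive.
logits = [[0.1, 0.7, 0.1, 0.1],   # argmax: class 1
          [0.5, 0.2, 0.2, 0.1],   # argmax: class 0
          [0.1, 0.2, 0.3, 0.4]]   # argmax: class 3
labels = [1, 2, 3]
print(topk_accuracy(logits, labels, k=1))  # 2 of 3 correct at top-1
```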