CIFAR-10


60,000 32×32 color images in 10 classes (6,000 per class). Classic small-scale image-classification benchmark with 50,000 training and 10,000 test images.

Benchmark Stats

Models: 11
Papers: 11
Metrics: 1

SOTA History

Metric: accuracy (higher is better)
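The single metric here is plain top-1 accuracy: the fraction of test images whose predicted class matches the label. A minimal sketch (function and variable names are illustrative, not from any particular benchmark harness):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must be the same length")
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

# 3 of 4 predictions correct -> 0.75
print(accuracy([3, 1, 2, 2], [3, 1, 0, 2]))
```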

Rank | Model | Source | Score | Year

1. ViT-H/14 (JFT-300M)

Vision Transformer ViT-H/14, pre-trained on JFT-300M and fine-tuned on CIFAR-10. 99.50 ± 0.06% reported in ViT paper Table 2 (appendix). This is the published state-of-the-art on CIFAR-10 as of 2025 (PapersWithCode SOTA). Paper: Dosovitskiy et al. 2021, ICLR 2021, arxiv:2010.11929.

Community | 99.5 | 2026
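The "99.50 ± 0.06%" above is the usual mean ± standard deviation of accuracy over several fine-tuning runs with different seeds. A minimal sketch of that aggregation (the per-run values below are hypothetical, chosen only to illustrate the arithmetic):

```python
from statistics import mean, stdev

# Hypothetical per-seed fine-tuning accuracies (illustrative only)
runs = [99.55, 99.44, 99.51]

# Sample standard deviation (n - 1 denominator), as statistics.stdev computes
print(f"{mean(runs):.2f} ± {stdev(runs):.2f}")  # -> 99.50 ± 0.06
```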
2. ViT-L/16 (JFT-300M)

Vision Transformer ViT-L/16, pre-trained on JFT-300M and fine-tuned on CIFAR-10. 99.42% reported in ViT paper Table 2 (appendix). Paper: Dosovitskiy et al. 2021, ICLR 2021, arxiv:2010.11929.

Community | 99.42 | 2026
3. BiT-L (ResNet-152x4)

Big Transfer (BiT) ResNet-152x4 large upstream variant, pre-trained on JFT-300M and fine-tuned on CIFAR-10. 99.37% reported in BiT paper Table 1. Paper: Kolesnikov et al., ECCV 2020, arxiv:1912.11370.

Community | 99.37 | 2026
4. ViT-H/14 (IN-21K)

Vision Transformer ViT-H/14, pre-trained on ImageNet-21K and fine-tuned on CIFAR-10. 99.27% reported in ViT paper Table B1 (appendix). Paper: Dosovitskiy et al. 2021, ICLR 2021, arxiv:2010.11929.

Community | 99.27 | 2026
5. DeiT-B (distilled)

Data-efficient image transformer DeiT-B with distillation, fine-tuned on CIFAR-10. Near-SOTA via transfer learning. Paper: Touvron et al. 2021, arxiv:2012.12877.

Editorial | 99.1 | 2025
6. ViT-L/16 (IN-21K)

Vision Transformer ViT-L/16, pre-trained on ImageNet-21K and fine-tuned on CIFAR-10. 99.0% reported in the ViT paper. Paper: Dosovitskiy et al. 2021, ICLR 2021, arxiv:2010.11929.

Community | 99.0 | 2026
7. EfficientNet-B8 (Noisy Student)

Noisy Student EfficientNet-B8, trained with self-training and injected noise. 98.7% on CIFAR-10. Paper: Xie et al. 2020, arxiv:1911.04252.

Community | 98.7 | 2026
8. ConvNeXt V2-Base

ConvNeXt V2-Base, a modernized ConvNet; strong CNN performance on this small-scale benchmark. Paper: Woo et al. 2023, arxiv:2301.00808.

Editorial | 98.7 | 2025
9. ViT-B/16 (IN-21K)

Vision Transformer ViT-B/16, pre-trained on ImageNet-21K and fine-tuned on CIFAR-10. 98.13% reported in the ViT paper. Paper: Dosovitskiy et al. 2021, ICLR 2021, arxiv:2010.11929.

Community | 98.13 | 2026
10. Swin-B

Swin Transformer Base, pre-trained on ImageNet-21K and fine-tuned on CIFAR-10. Paper: Liu et al. 2021, arxiv:2103.14030.

Community | 98.0 | 2026
11. ResNet-50

ResNet-50 with Cutout augmentation (DeVries & Taylor 2017, arxiv:1708.04552).

Editorial96.012025Source
