CIFAR-10


60,000 32×32 color images in 10 classes; the classic small-scale image classification benchmark, split into 50,000 training and 10,000 test images.
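The binary distribution of CIFAR-10 stores each image as a 3,073-byte record: one label byte (class id 0–9) followed by 3,072 pixel bytes (3×32×32, channel-major R, G, B). A minimal parsing sketch, run here on a synthetic two-record buffer rather than a real `data_batch` file:

```python
import numpy as np

RECORD_BYTES = 1 + 3 * 32 * 32  # 1 label byte + 3072 pixel bytes per image

def parse_cifar10_batch(raw: bytes):
    """Split a CIFAR-10 binary batch into labels and CHW uint8 images."""
    records = np.frombuffer(raw, dtype=np.uint8).reshape(-1, RECORD_BYTES)
    labels = records[:, 0]                          # class ids 0..9
    images = records[:, 1:].reshape(-1, 3, 32, 32)  # channel-major R, G, B planes
    return labels, images

# Synthetic stand-in for a batch file: two fake images with labels 3 and 7.
fake = bytes([3] + [0] * 3072 + [7] + [255] * 3072)
labels, images = parse_cifar10_batch(fake)  # labels → [3, 7], images.shape → (2, 3, 32, 32)
```

In practice most frameworks ship a ready-made loader for this dataset, so hand-parsing is only needed when working from the raw binary files.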

Benchmark Stats

Models: 7 · Papers: 7 · Metrics: 1

SOTA History

Metric: accuracy (higher is better)
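The leaderboard's single metric is top-1 accuracy: the fraction of test images whose predicted class matches the ground-truth label. A minimal sketch on synthetic logits (no real model involved):

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose argmax prediction matches the label."""
    preds = logits.argmax(axis=1)
    return float((preds == labels).mean())

# Toy example: 4 samples over CIFAR-10's 10 classes, one forced mistake.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))
labels = logits.argmax(axis=1).copy()
labels[0] = (labels[0] + 1) % 10   # corrupt one label so 3 of 4 are correct
acc = top1_accuracy(logits, labels)  # → 0.75
```

Leaderboard scores are this value expressed as a percentage of the 10,000-image test set.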

| Rank | Model | Source | Accuracy | Year | Notes |
|------|-------|--------|----------|------|-------|
| 1 | deit-b-distilled | Editorial | 99.1 | 2025 | Near-SOTA on CIFAR-10 with transfer learning. |
| 2 | ViT-L/16 (IN-21K) | Community | 99.0 | 2026 | Vision Transformer ViT-L/16, pretrained on ImageNet-21K and finetuned on CIFAR-10; 99.0% reported in the ViT paper (Dosovitskiy et al. 2021, arXiv:2010.11929). |
| 3 | EfficientNet-B8 (NoisyStudent) | Community | 98.7 | 2026 | NoisyStudent EfficientNet-B8 trained with self-training and noise; 98.7% on CIFAR-10 (Xie et al. 2020, arXiv:1911.04252). |
| 4 | convnext-v2-base | Editorial | 98.7 | 2025 | Strong CNN performance on this small-scale benchmark. |
| 5 | ViT-B/16 (IN-21K) | Community | 98.13 | 2026 | Vision Transformer ViT-B/16, pretrained on ImageNet-21K and finetuned on CIFAR-10; 98.13% reported in the ViT paper (Dosovitskiy et al. 2021, arXiv:2010.11929). |
| 6 | Swin-B | Community | 98.0 | 2026 | Swin Transformer Base, finetuned on CIFAR-10 from IN-21K pretraining (Liu et al. 2021, arXiv:2103.14030). |
| 7 | resnet-50 | Editorial | 96.01 | 2025 | With Cutout augmentation. |


CIFAR-10 Leaderboard | CodeSOTA