Codesota · Benchmark · ImageNet Linear Probe

ImageNet Linear Probe

Linear classification on features from a frozen backbone, evaluated on ImageNet-1K: a single linear classifier is trained on top of the frozen representation. The protocol measures the representation quality of self-supervised and contrastive models without fine-tuning the backbone.
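The protocol can be sketched end to end on synthetic stand-in data. Everything below is illustrative (random projection as a "backbone", made-up shapes), not any real model: the backbone stays frozen, and only a linear softmax classifier is trained on the extracted features.

```python
# Sketch of the linear-probe evaluation protocol on synthetic stand-in data.
# The "backbone" here is a fixed random projection, a hypothetical stand-in
# for a real frozen encoder; only the linear classifier is ever updated.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images" with a class signal a linear encoder can preserve.
n, d_in, n_classes = 400, 32, 2
labels = rng.integers(0, n_classes, size=n)
images = rng.standard_normal((n, d_in))
images[:, 0] += 3.0 * labels  # class-dependent shift in one raw dimension

# Frozen backbone: a fixed random projection, never updated during probing.
W_frozen = rng.standard_normal((d_in, 64))
feats = images @ W_frozen
feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)  # standardize

# Linear probe: multinomial logistic regression trained by gradient descent.
W = np.zeros((64, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[labels]
for _ in range(300):
    logits = feats @ W + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = (probs - onehot) / n                  # d(cross-entropy)/d(logits)
    W -= 0.5 * feats.T @ grad                    # only the probe is updated
    b -= 0.5 * grad.sum(axis=0)

# Top-1 accuracy of the probe on the frozen features.
acc = float(((feats @ W + b).argmax(axis=1) == labels).mean())
print(f"linear-probe top-1 accuracy: {acc:.3f}")
```

Real evaluations follow the same shape, just with a pretrained encoder (e.g. a DINOv2 or MAE checkpoint) as the frozen feature extractor and ImageNet-1K as the data.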

§ 01 · SOTA history

Year over year.

Not enough data to show trend.
§ 02 · Leaderboard

Results by metric.

Top-1 Accuracy

Top-1 Accuracy is the reported evaluation metric for ImageNet Linear Probe. Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.

Higher is better.

Trust tiers for Top-1 Accuracy: verified · paper · vendor · community · unverified
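The metric itself is simple: a prediction counts as correct only when the single highest-scoring class equals the ground-truth label. A minimal sketch with made-up logits:

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Fraction of samples whose single highest-scoring class is the true label."""
    return float((logits.argmax(axis=1) == labels).mean())

# Made-up 3-sample, 4-class logits: rows predict classes 1, 0, 3.
logits = np.array([[0.1, 2.0, 0.3, 0.4],
                   [1.5, 0.2, 0.1, 0.0],
                   [0.0, 0.1, 0.2, 3.0]])
labels = np.array([1, 2, 3])          # second sample is misclassified
print(top1_accuracy(logits, labels))  # 2 of 3 correct
```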
(rank. model · trust · top-1 score · year · source)

01. DINOv2 ViT-g/14 · paper · 86.5 · 2023 · Source ↗
    Self-supervised pretraining, linear probe on frozen features, ImageNet-1k.
02. SimCLRv2 (ResNet-152 3x) · paper · 79.8 · 2020 · Source ↗
    SimCLRv2 contrastive SSL, linear probe, ImageNet-1k.
03. MAE ViT-H/14 · paper · 76.6 · 2022 · Source ↗
    Masked Autoencoder, linear probe, ImageNet-1k.

Verified results
(rank. model · trust · top-1 score · year · source)

01. DINOv2 ViT-g/14 · verified · 86.5 · 2023 · Source ↗
    DINOv2 ViT-g/14, self-supervised via distillation. Linear probe on frozen features. Source: facebookresearch/dinov2 README pretrained models table. Paper: Oquab et al. 2023, arXiv:2304.07193.
02. DINOv2 ViT-L/14 · verified · 86.3 · 2023 · Source ↗
    DINOv2 ViT-L/14, self-supervised via distillation. Linear probe on frozen features. Source: facebookresearch/dinov2 README pretrained models table. Paper: Oquab et al. 2023, arXiv:2304.07193.
03. CLIP ViT-L/14 · verified · 85.3 · 2021 · Source ↗
    OpenAI CLIP ViT-L/14, contrastive pre-training on 400M image-text pairs. Linear probe on frozen features; 85.3% reported in the original CLIP paper (Table 10, Appendix). Paper: Radford et al. 2021, arXiv:2103.00020.
04. MAE ViT-H/14 · verified · 77.2 · 2022 · Source ↗
    Masked Autoencoder ViT-H/14. Linear probe on frozen features (PyTorch reimplementation). Source: facebookresearch/mae FINETUNE.md linear probing table. Paper: He et al. 2022, arXiv:2111.06377. Note: MAE is optimized for fine-tuning rather than linear probing; its fine-tuned accuracy is 87.8%.
05. MAE ViT-L/16 · verified · 76.0 · 2022 · Source ↗
    Masked Autoencoder ViT-L/16. Linear probe on frozen features (PyTorch reimplementation). Source: facebookresearch/mae FINETUNE.md. Paper: He et al. 2022, arXiv:2111.06377.
§ 04 · Submit a result

Add to the leaderboard.
