Model card

AIMv2-3B.

Apple · open-source · 2.7B params · Vision Transformer (Autoregressive Pre-trained)

Multimodal autoregressive pre-training of a large vision encoder. 2.7B parameters, patch size 14, 448 px input resolution. Trained with an image+text autoregressive objective on proprietary data. Released Nov 2024. Paper: arxiv:2411.14402.
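The configuration above (448 px input, 14 px patches) implies a 32×32 grid of 1024 patch tokens. A minimal sketch of that patch arithmetic, plus a toy causal mask of the kind an autoregressive objective over patch tokens would use. The raster token ordering is an assumption for illustration, not a detail from this card; the actual objective is described in arxiv:2411.14402.

```python
import numpy as np

# Patch grid implied by the card: 448 px input, 14 px patches.
patch_size = 14
resolution = 448
grid = resolution // patch_size      # 32 patches per side
num_patches = grid * grid            # 1024 patch tokens per image

# Toy causal mask for an autoregressive objective over patch tokens:
# token i may attend only to tokens 0..i. Raster ordering is an
# illustrative assumption, not specified on this card.
T = 6  # tiny sequence length for demonstration
causal_mask = np.tril(np.ones((T, T), dtype=bool))

print(num_patches)        # 1024
print(causal_mask[0])     # only the first token is visible to token 0
```

At the full 1024-token sequence the mask is built the same way, just with `T = num_patches`.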

§ 01 · Benchmarks

Every benchmark AIMv2-3B has a recorded score for.

# | Benchmark | Area · Task | Metric | Value | Rank | Date | Source
01 | ImageNet-1K | Computer Vision · Image Classification | top-1-accuracy | 89.5% | #4/20 | | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where AIMv2-3B actually performs.

Computer Vision · 1 benchmark · avg rank #4.0
§ 05 · Sources & freshness

Where these numbers come from.

arxiv-paper · 1 result · 0 of 1 rows marked verified.