Model card

ViT-Adapter-L (BEiT-3)

Microsoft Research · open-source · Unknown params · ViT-L with spatial prior adapter + BEiT-3 pre-training + Mask2Former head

ViT-Adapter bridges the gap between the plain ViT and hierarchical backbones for dense prediction tasks. With BEiT-3 pre-training, a ViT-Adapter-L backbone paired with a Mask2Former head reaches 62.8 multi-scale mIoU on ADE20K val, state of the art at the time. ICLR 2023 Spotlight.
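The "spatial prior adapter" works by injecting multi-scale convolutional features into the plain-ViT token stream via cross-attention, giving the frozen-architecture ViT the local spatial priors that hierarchical backbones get for free. A minimal NumPy sketch of that injection step follows; the toy shapes, single attention head, and random tokens are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # queries:     (Nq, d) ViT patch tokens
    # keys_values: (Nk, d) spatial-prior tokens from a conv stem
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

# Toy setup: 196 patch tokens (14x14 grid), 49 spatial-prior
# tokens (7x7 grid), embedding dim 32 -- all hypothetical sizes.
rng = np.random.default_rng(0)
d = 32
patch_tokens = rng.standard_normal((196, d))
prior_tokens = rng.standard_normal((49, d))

# Injector: add spatial-prior information into the ViT token
# stream via a residual cross-attention update.
injected = patch_tokens + cross_attention(patch_tokens, prior_tokens, d)
print(injected.shape)  # (196, 32)
```

In the full model this injection (and a symmetric extraction step) is interleaved with the ViT blocks, so the backbone itself stays unmodified.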

§ 01 · Benchmarks

No recorded benchmark results yet.


The Rank column shows this model's position among all models scored on the same benchmark + metric (competitor count after the slash). #1 in red marks the current SOTA. Results are sorted by rank, then by newest.

§ 04 · Related models

Other Microsoft Research models scored on Codesota.

Faster R-CNN · Unknown params · 7 results
Swin-L (Cascade R-CNN) · Unknown params · 1 result
DiT-L (Cascade R-CNN) · Unknown params · 0 results
Faster R-CNN (VGG-16) · ~137M params · 0 results
LayoutLMv3-Large · Unknown params · 0 results
NaturalSpeech · Unknown params · 0 results
NaturalSpeech 3 · Unknown params · 0 results
SwinV2-G · Unknown params · 0 results