Model card
SeMask-L.
SHI Labs · open-source · Unknown params · Swin-L encoder with Semantic Attention + Mask2Former decoder
Incorporates semantic information into the encoder via Semantic Attention blocks at multiple stages. SeMask-L + Mask2Former achieves 49.35 mIoU on ADE20K val. Published at ICCVW 2023.
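The card says the encoder mixes in semantic information at multiple stages. As a rough illustration of that idea, here is a toy, dependency-free sketch of a semantic-attention-style block: features are projected to per-class logits (a semantic prior), and a class-conditioned residual is added back into the features. All names, shapes, and projections here are illustrative assumptions, not the SeMask implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a plain Python list."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def semantic_attention(features, class_proj, out_proj):
    """Toy SeMask-style semantic attention (hypothetical sketch).

    features:   list of d-dim feature vectors (one per spatial location)
    class_proj: d x K matrix projecting features to K class logits
    out_proj:   K x d matrix mapping class probabilities back to feature space

    Returns (updated_features, semantic_logits). The real block operates
    on Swin feature maps with learned projections and attention; this
    only mimics the "predict classes, feed them back" structure.
    """
    d = len(features[0])
    k = len(class_proj[0])
    updated, logits_all = [], []
    for f in features:
        # per-location class logits: the semantic prior for this location
        logits = [sum(f[i] * class_proj[i][c] for i in range(d)) for c in range(k)]
        probs = softmax(logits)
        # class-conditioned residual added back into the features
        delta = [sum(probs[c] * out_proj[c][j] for c in range(k)) for j in range(d)]
        updated.append([f[j] + delta[j] for j in range(d)])
        logits_all.append(logits)
    return updated, logits_all
```

In SeMask the per-stage semantic logits are also supervised with an auxiliary segmentation loss, which is what injects semantic structure into the encoder; the sketch above only shows the feed-forward path.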
§ 02 · Benchmarks
No benchmark results recorded for this model yet. If you have a score, submit it →
The Rank column shows this model's position among all models scored on the same benchmark and metric (total competitors after the slash). #1 in red marks the current SOTA. Results are sorted by rank, then by newest result.
§ 05 · Related models