Codesota · Models · 1,870 models indexed · 34 match filter
Every model, measured.
Start with a research area, drill into a vendor, or page through the full index. Only models with at least one benchmark score appear — a model without a recorded score can’t be ranked.
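The eligibility rule above (a model appears only if it has at least one recorded benchmark score) can be sketched in a few lines. This is a hypothetical illustration, not Codesota's actual code; the `Model` structure, field names, and `rankable` function are all assumptions made for the example.

```python
# Hypothetical sketch of the index's eligibility rule: only models with
# at least one recorded benchmark score can be ranked, so unscored
# models are filtered out before the listing is built.
from dataclasses import dataclass, field


@dataclass
class Model:
    name: str
    vendor: str
    scores: list = field(default_factory=list)  # recorded benchmark scores


def rankable(models):
    """Keep only models with at least one score, most-benchmarked first."""
    scored = [m for m in models if len(m.scores) > 0]
    return sorted(scored, key=lambda m: len(m.scores), reverse=True)


index = [
    Model("Phi-4", "Microsoft", scores=[0.81, 0.76, 0.88]),
    Model("UnscoredModel", "Microsoft"),  # no scores: excluded from ranking
]
print([m.name for m in rankable(index)])  # ['Phi-4']
```

The same predicate would back both the "34 match filter" count and the per-vendor pages: filter first, then group by vendor.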
§ 01 · Microsoft models
34 models from Microsoft · page 1 of 1.
| # | Model | Vendor | Parameters | Architecture | SOTA | Benchmarks | Results |
|---|---|---|---|---|---|---|---|
| 001 | DeBERTa-v3-large | Microsoft | 304M | DeBERTa-v3-large | 4 | 5 | 6 |
| 002 | Phi-4 | Microsoft | 14B | Transformer | 2 | 3 | 17 |
| 003 | RAD-DINO | Microsoft | — | Self-supervised ViT | 1 | 2 | 2 |
| 004 | VALL-E 2 | Microsoft | — | Neural codec language model (EnCodec tokens) | 1 | 2 | 2 |
| 005 | NaturalSpeech 3 | Microsoft | ~500M | Factorized codec + non-AR diffusion | 1 | 1 | 1 |
| 006 | Swin Transformer V2 Large | Microsoft | 197M | Hierarchical Vision Transformer | 1 | 1 | 1 |
| 007 | WavLM Large (SV) | Microsoft | 316M | WavLM Large + ECAPA-TDNN head | 1 | 1 | 1 |
| 008 | WizardLM-2-8x22b | Microsoft | — | — | 1 | 7 | — |
| 009 | E5-Mistral-7B-instruct | Microsoft | 7B | Mistral-7B (LLM-based embedding) | 3 | 3 | — |
| 010 | ResNet-50 | Microsoft | 25M | CNN | 3 | 3 | — |
| 011 | UniXcoder | Microsoft | — | Transformer encoder-decoder | 3 | 3 | — |
| 012 | CodeBERT | Microsoft | — | BERT | 2 | 2 | — |
| 013 | Florence-2-Large | Microsoft | — | — | 1 | 2 | — |
| 014 | KOSMOS-2.5 | Microsoft | — | — | 1 | 2 | — |
| 015 | LightGBM | Microsoft | — | Gradient Boosted Trees (leaf-wise) | 2 | 2 | — |
| 016 | ResNet-152 | Microsoft | 60M | CNN | 2 | 2 | — |
| 017 | Azure Document Intelligence | Microsoft | — | Managed layout + OCR extraction service | 1 | 1 | — |
| 018 | Azure OCR | Microsoft | — | Cloud OCR service | 1 | 1 | — |
| 019 | BEiT-3 (ViT-L) | Microsoft | — | Multiway Transformer (ViT-L/14) | 1 | 1 | — |
| 020 | BioViL | Microsoft | — | Vision-Language Transformer | 1 | 1 | — |
| 021 | CodeBERT | Microsoft | — | BERT pretrained on code + NL | 1 | 1 | — |
| 022 | DeBERTa (ensemble) | Microsoft | — | — | 1 | 1 | — |
| 023 | DiT-Base | Microsoft | — | Vision Transformer (self-supervised) | 1 | 1 | — |
| 024 | DiT-Large | Microsoft | — | Document Image Transformer Large | 1 | 1 | — |
| 025 | GraphCodeBERT | Microsoft | 125M | Transformer | 1 | 1 | — |
| 026 | LayoutLMv3 | Microsoft | — | Multimodal Transformer (text + layout + image) | 1 | 1 | — |
| 027 | Pengi | Microsoft | ~300M | CLAP audio encoder + GPT-2 decoder | 1 | 1 | — |
| 028 | Phi-4 14B | Microsoft | 14B | — | 1 | 1 | — |
| 029 | Swin Transformer Large | Microsoft | 197M | Hierarchical Vision Transformer | 1 | 1 | — |
| 030 | Swin-L + UperNet | Microsoft | — | Swin Transformer Large backbone + UperNet head | 1 | 1 | — |
| 031 | UFO (GPT-4V) | Microsoft | — | UI-focused dual-agent architecture on GPT-4V | 1 | 1 | — |
| 032 | VALL-E | Microsoft | ~400M | Neural codec LM (EnCodec tokens) | 1 | 1 | — |
| 033 | mDeBERTa-v3-base | Microsoft | 86M | DeBERTa-v3 (multilingual) | 1 | 1 | — |
| 034 | swin_large.ms_in22k_ft_in1k | Microsoft | — | Swin-L, IN22K pre-train, IN1K fine-tune | 1 | 1 | — |