Codesota · Models · 1,870 models indexed · 50 match filter
Every model, measured.
Start with a research area, drill into a vendor, or page through the full index. Only models with at least one benchmark score appear — a model without a recorded score can’t be ranked.
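The eligibility rule can be sketched as a simple filter. This is an illustrative sketch only; the field names (`name`, `scores`) are hypothetical and do not reflect Codesota's actual schema.

```python
# Minimal sketch of the index's eligibility rule: a model appears
# only if it has at least one recorded benchmark score.
# Field names ("name", "scores") are hypothetical illustrations.

def rankable(models):
    """Keep only models that can be ranked (>= 1 benchmark score)."""
    return [m for m in models if len(m.get("scores", [])) >= 1]

catalog = [
    {"name": "CheXNet", "scores": [0.84, 0.79]},
    {"name": "UnscoredModel", "scores": []},
]

print([m["name"] for m in rankable(catalog)])  # only CheXNet survives
```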
§ 01 · Medical models
50 models in Medical · page 1 of 1.
| # | Model | Vendor | Parameters | Architecture | SOTA | Benchmarks | Results |
|---|---|---|---|---|---|---|---|
| 001 | TorchXRayVision | Cohen Lab | — | DenseNet-121 / ResNet | 2 | 6 | 6 |
| 002 | DenseNet-121 (Chest X-ray) | Research | 8M | DenseNet-121 | 2 | 4 | 4 |
| 003 | MedNeXt-L | German Cancer Research Center (DKFZ) | 62M | ConvNeXt-based encoder-decoder | 2 | 3 | 3 |
| 004 | CheXzero | Harvard/MIT | — | CLIP-based Vision-Language | 1 | 2 | 2 |
| 005 | ChebGAT-GCN | Academic | — | Chebyshev Spectral GCN + Graph Attention Network | 1 | 1 | 2 |
| 006 | MAACNN | Research | — | CNN | 1 | 2 | 2 |
| 007 | RAD-DINO | Microsoft | — | Self-supervised ViT | 1 | 2 | 2 |
| 008 | STU-Net-H | Ziyan Huang et al. | 1.4B | Scalable U-Net | 1 | 2 | 2 |
| 009 | CheXpert AUC Maximizer | Stanford | — | DenseNet-121 Ensemble | 1 | 1 | 1 |
| 010 | DeepASD | Research | — | Adversary-regularized GNN | 1 | 1 | 1 |
| 011 | SSAE + Softmax (Explainable ASD) | Academic | — | Stacked Sparse Autoencoder + Softmax | 1 | 1 | 1 |
| 012 | SegMamba | Xing et al. | 53M | Mamba-based 3D segmentation | 1 | 1 | 1 |
| 013 | nnU-Net v2 | German Cancer Research Center (DKFZ) | varies (~31M default) | U-Net (self-configuring) | — | 4 | 4 |
| 014 | STU-Net-L | Ziyan Huang et al. | 440M | Scalable U-Net | — | 3 | 3 |
| 015 | ASD-SWNet | Research | — | Shared-weight CNN | — | 1 | 2 |
| 016 | ASDFormer | Research | — | Transformer with Mixture of Experts | — | 1 | 2 |
| 017 | BrainTWT | Academic | — | Temporal Random Walk + Transformer | — | 1 | 2 |
| 018 | Causal fMRI Model | Academic | — | Causality-inspired Deep Learning | — | 1 | 2 |
| 019 | CheXNet | Stanford ML Group | 8M | DenseNet-121 | — | 2 | 2 |
| 020 | GCN | Research | — | Graph Convolutional Network | — | 1 | 2 |
| 021 | MVS-GCN | Research | — | Multi-view Site Graph Convolutional Network | — | 1 | 2 |
| 022 | Random Forest | scikit-learn | — | Random Forest | — | 2 | 2 |
| 023 | SVM with Connectivity Features | Research | — | Support Vector Machine | — | 1 | 2 |
| 024 | Swin UNETR | NVIDIA (MONAI) | 62M | Swin Transformer encoder + U-Net decoder | — | 2 | 2 |
| 025 | TransUNet | Chen et al. (JHU) | 105M | ViT encoder + U-Net decoder | — | 2 | 2 |
| 026 | U-Mamba (Bot) | Wang et al. (University of Toronto) | 59M | Mamba (SSM) + U-Net | — | 2 | 2 |
| 027 | UNETR | NVIDIA (MONAI) | 93M | Pure ViT encoder + U-Net decoder | — | 2 | 2 |
| 028 | nnFormer | Zhou et al. | 150M | Interleaved Transformer for volumetric segmentation | — | 2 | 2 |
| 029 | AE-FCN | Research | — | Autoencoder + Fully Connected Network | — | 1 | 1 |
| 030 | AL-Negat | Research | — | Graph Neural Network | — | 1 | 1 |
| 031 | Abraham Connectomes | Research | — | Connectome Analysis | — | 1 | 1 |
| 032 | BioViL | Microsoft | — | Vision-Language Transformer | — | 1 | 1 |
| 033 | BrainGNN | Research | — | Graph Neural Network | — | 1 | 1 |
| 034 | BrainGT | Research | — | Graph Transformer | — | 1 | 1 |
| 035 | ConVIRT | NYU | — | Contrastive Vision-Language | — | 1 | 1 |
| 036 | Deep Learning (Heinsfeld) | Research | — | Deep Neural Network | — | 1 | 1 |
| 037 | GLoRIA | Stanford | — | Vision-Language (Local + Global) | — | 1 | 1 |
| 038 | LightM-UNet | Liao et al. | 4M | Lightweight Mamba + U-Net | — | 1 | 1 |
| 039 | MADE-for-ASD | Academic | — | Multi-Atlas Deep Ensemble Network | — | 1 | 1 |
| 040 | MCBERT | Research | — | Multi-modal CNN-BERT | — | 1 | 1 |
| 041 | MSalNET | Academic | — | Multi-site Adversarial Learning Network | — | 1 | 1 |
| 042 | MedCLIP | Research | — | CLIP-based Vision-Language | — | 1 | 1 |
| 043 | Multi-Atlas DNN | Research | — | Deep Neural Network | — | 1 | 1 |
| 044 | Multi-Task Transformer | Research | — | Transformer | — | 1 | 1 |
| 045 | PHGCL-DDGFormer | Research | — | Graph Transformer | — | 1 | 1 |
| 046 | Plymouth DL Model | Research | — | Deep Learning with XAI | — | 1 | 1 |
| 047 | RGTNet | Academic | — | Residual Graph Transformer | — | 1 | 1 |
| 048 | ResNet-50 (Chest X-ray) | Research | 25M | ResNet-50 | — | 1 | 1 |
| 049 | SAM-Med3D | University Medical Center Hamburg-Eppendorf et al. | 387M | SAM adapted for 3D volumes | — | 1 | 1 |
| 050 | SegVol | BAAI (Beijing Academy of AI) | 90M | SAM-based volumetric segmentation | — | 1 | 1 |
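The ordering visible above (SOTA count descending, then result count descending, then model name) can be reproduced with a compound sort key. This is an inference from the table's ordering, not a documented Codesota algorithm, and the dict fields are hypothetical.

```python
# Hedged sketch of the apparent ranking: sort by SOTA count (desc),
# then result count (desc), then model name as a tiebreaker.
# Inferred from the table ordering, not from any documented API.

def rank(models):
    return sorted(models, key=lambda m: (-m["sota"], -m["results"], m["name"]))

rows = [
    {"name": "CheXzero", "sota": 1, "results": 2},
    {"name": "TorchXRayVision", "sota": 2, "results": 6},
    {"name": "SegVol", "sota": 0, "results": 1},
]

print([m["name"] for m in rank(rows)])
# TorchXRayVision ranks first (most SOTA results), SegVol last (none)
```

Negating the numeric fields inside the key gives descending order for both counts while the name tiebreaker stays ascending, avoiding a multi-pass sort.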