Codesota · Models
1,870 models indexed · 768 match filter
Every model, measured.
Start with a research area, drill into a vendor, or page through the full index. Only models with at least one benchmark score appear — a model without a recorded score can’t be ranked.
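The eligibility rule above (only models with at least one recorded benchmark score can be ranked) amounts to a simple filter over the catalog. A minimal sketch; the `Model` shape, field names, and sample entries here are illustrative assumptions, not Codesota's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    vendor: str
    scores: list = field(default_factory=list)  # recorded benchmark scores

def rankable(models):
    """Keep only models with at least one recorded benchmark score."""
    return [m for m in models if len(m.scores) > 0]

# Hypothetical catalog entries for illustration.
catalog = [
    Model("ExampleNet-B", "VendorX", scores=[71.2]),
    Model("ExampleNet-S", "VendorY"),  # no score recorded: excluded from ranking
]
print([m.name for m in rankable(catalog)])  # ['ExampleNet-B']
```

Any model failing this filter is simply absent from the index, which is why the indexed total (1,870) exceeds the number shown (768).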
Vendor filter: Areas overview, or narrow to a single vendor; per-vendor counts range from Unknown (509), speakleash (253), OpenAI (75), and Google (67) down to hundreds of single-model vendors.
§ 01 · Computer Vision models
768 models in Computer Vision · page 15 of 16.
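The pagination above (768 models, page 15 of 16, rows 701-750 below) is consistent with a page size of 50. A quick sketch of that arithmetic, assuming that page size:

```python
import math

TOTAL = 768     # models in the Computer Vision area
PAGE_SIZE = 50  # assumed: the rows on this page run 701-750
PAGE = 15

pages = math.ceil(TOTAL / PAGE_SIZE)      # total page count
start = (PAGE - 1) * PAGE_SIZE + 1        # first row on this page
end = min(PAGE * PAGE_SIZE, TOTAL)        # last row on this page
print(pages, start, end)  # 16 701 750
```

The final page (16) would then hold the remaining 18 models, rows 751-768.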
| # | Model | Vendor | Parameters | Architecture | SOTA | Benchmarks | Results |
|---|---|---|---|---|---|---|---|
| 701 | STaR-8B | Unknown | — | — | 1 | 1 | |
| 702 | SVTR-B (Base) | Unknown | Unknown | Unknown | 1 | 1 | |
| 703 | SVTR-L (Large) | Unknown | Unknown | Unknown | 1 | 1 | |
| 704 | SVTR-S (Small) | Unknown | Unknown | Unknown | 1 | 1 | |
| 705 | SVTR-T (Tiny) | Unknown | Unknown | Unknown | 1 | 1 | |
| 706 | SVTRv2-B | Du et al. | Unknown | SVTR Base + Multi-Size Resizing + Feature Rearrangement + Semantic Guidance (CTC) | 1 | 1 | |
| 707 | SVTRv2-S | Du et al. | Unknown | SVTR Small + Multi-Size Resizing + Feature Rearrangement + Semantic Guidance (CTC) | 1 | 1 | |
| 708 | SVTRv2-T | Du et al. | Unknown | SVTR Tiny + Multi-Size Resizing + Feature Rearrangement + Semantic Guidance (CTC) | 1 | 1 | |
| 709 | SciFive-large | Unknown | Unknown | Unknown | 1 | 1 | |
| 710 | SenseTime Basemodel | SenseTime | — | — | 1 | 1 | |
| 711 | Siamese Small-E-Czech (Electra-small) | Unknown | Unknown | Unknown | 1 | 1 | |
| 712 | SigNet-F (SVM) | Unknown | Unknown | Unknown | 1 | 1 | |
| 713 | SoViT-400M/14 | Google DeepMind | 400M | Compute-optimal ViT shape | 1 | 1 | |
| 714 | SoViT-400m/14 | Google DeepMind | 400M | Vision Transformer (Shape-Optimized) | 1 | 1 | |
| 715 | StrucTexTv2 (large) | Unknown | Unknown | Unknown | 1 | 1 | |
| 716 | Surya | VikParuchuri | — | — | 1 | 1 | |
| 717 | Swin Transformer Large | Microsoft | 197M | Hierarchical Vision Transformer | 1 | 1 | |
| 718 | Swin-L (Cascade R-CNN) | Microsoft Research | — | — | 1 | 1 | |
| 719 | Swin-L + UperNet | Microsoft | Unknown | Swin Transformer Large backbone + UperNet head | 1 | 1 | |
| 720 | T-REX (Phi-4) | Unknown | — | — | 1 | 1 | |
| 721 | TILT-Base | Unknown | Unknown | Unknown | 1 | 1 | |
| 722 | TILT-Large | Unknown | Unknown | Unknown | 1 | 1 | |
| 723 | TSRFormer | Unknown | Unknown | Unknown | 1 | 1 | |
| 724 | Tab-PoT | Unknown | Unknown | Unknown | 1 | 1 | |
| 725 | TabSQLify (col+row) | Unknown | Unknown | Unknown | 1 | 1 | |
| 726 | TableNet | Unknown | Unknown | Unknown | 1 | 1 | |
| 727 | TextBlockV2 (GPT-2) | Jiahao Lyu et al., Fudan University | Unknown | GPT-2 LM decoder for detection-free scene text spotting | 1 | 1 | |
| 728 | TextCohesion | Unknown | Unknown | Unknown | 1 | 1 | |
| 729 | TextMonkey | Huawei | — | — | 1 | 1 | |
| 730 | Thinker | UBTECH | — | — | 1 | 1 | |
| 731 | TrOCR-small | Microsoft | 62M | Transformer encoder-decoder (image-to-text) | 1 | 1 | |
| 732 | TransOCR | Unknown | Unknown | Unknown | 1 | 1 | |
| 733 | Transfer Learning from AlexNet, VGG-16, GoogLeNet and ResNet50 | Unknown | Unknown | Unknown | 1 | 1 | |
| 734 | Transfer Learning from VGG16 trained on Imagenet | Unknown | Unknown | Unknown | 1 | 1 | |
| 735 | TransferDoc | Unknown | Unknown | Unknown | 1 | 1 | |
| 736 | Transformer + CNN | Unknown | Unknown | Unknown | 1 | 1 | |
| 737 | Transformer w/ CNN (+synth) | Unknown | Unknown | Unknown | 1 | 1 | |
| 738 | USM (COCO-TS + ICDAR 2013) | Unknown | Unknown | Unknown | 1 | 1 | |
| 739 | UniTabNet | Anonymous / ACL community | Unknown | Vision-language model bridging image encoder and text decoder for table structure parsing | 1 | 1 | |
| 740 | VLAWE | Unknown | Unknown | Unknown | 1 | 1 | |
| 741 | VLCDoC | Unknown | Unknown | Unknown | 1 | 1 | |
| 742 | ViT-22B/14 | Google | 22B | Scaled Vision Transformer (22B) | 1 | 1 | |
| 743 | ViT-Adapter-L | Nanjing University | — | — | 1 | 1 | |
| 744 | ViT-G/14 | Google | 1.8B | Vision Transformer | 1 | 1 | |
| 745 | ViT-L/16 | Google | 307M | Vision Transformer | 1 | 1 | |
| 746 | ViTDet-H | Meta AI | Unknown | Plain ViT-Huge + Cascade Mask R-CNN | 1 | 1 | |
| 747 | YOLO11x | Ultralytics | Unknown | YOLO v11 Extra-Large | 1 | 1 | |
| 748 | YOLOv8-DocLayNet | Research | Unknown | YOLOv8 fine-tuned on DocLayNet | 1 | 1 | |
| 749 | coatnet_2_rw_224.sw_in12k_ft_in1k | Timm | — | CoAtNet-2 RW, IN12K pre-train, IN1K fine-tune | 1 | 1 | |
| 750 | convnext_base.fb_in22k_ft_in1k | Meta AI | — | ConvNeXt-B, IN22K pre-train, IN1K fine-tune | 1 | 1 |