Codesota · Models
1,870 models indexed · 768 match filter
Every model, measured.
Start with a research area, drill into a vendor, or page through the full index. Only models with at least one benchmark score appear — a model without a recorded score can’t be ranked.
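The eligibility rule above can be sketched as a simple filter. This is an illustrative sketch only; the model names and scores below are placeholders, not data from the index:

```python
# Hypothetical sketch of the index's eligibility rule: only models with at
# least one recorded benchmark score can be ranked, so unscored models are
# dropped before ranking.
models = [
    {"name": "DocLayout-YOLO", "vendor": "Unknown", "scores": [91.2]},
    {"name": "UnscoredModel", "vendor": "Acme", "scores": []},  # placeholder
]

# Keep only models with at least one benchmark score.
ranked = [m for m in models if len(m["scores"]) >= 1]
assert [m["name"] for m in ranked] == ["DocLayout-YOLO"]
```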
§ 01 · Computer Vision models
768 models in Computer Vision · page 12 of 16.
| # | Model | Vendor | Parameters | Architecture | SOTA | Benchmarks | Results |
|---|---|---|---|---|---|---|---|
| 551 | DoPTA-HR (512×512) | — | — | Transformer | 1 | 1 | |
| 552 | DocBERT | Unknown | Unknown | Unknown | 1 | 1 | |
| 553 | DocFormer large | Unknown | Unknown | Unknown | 1 | 1 | |
| 554 | DocFormerBASE | Unknown | Unknown | Unknown | 1 | 1 | |
| 555 | DocLayout-YOLO | Unknown | Unknown | Unknown | 1 | 1 | |
| 556 | DocXClassifier-B | Unknown | Unknown | Unknown | 1 | 1 | |
| 557 | DocXClassifier-FPN | Saifullah et al. | — | CNN with Feature Pyramid Network | 1 | 1 | |
| 558 | DocXClassifier-L | Unknown | Unknown | Unknown | 1 | 1 | |
| 559 | Docling | IBM Research | Unknown | Open-source document parsing toolkit (layout + OCR + table) | 1 | 1 | |
| 560 | Dolphin | Research | — | — | 1 | 1 | |
| 561 | Dolphin-1.5 | ByteDance | — | — | 1 | 1 | |
| 562 | Dolphin-v2 | ByteDance | — | — | 1 | 1 | |
| 563 | Donut | Unknown | Unknown | Unknown | 1 | 1 | |
| 564 | Dots OCR 1.5 | RedNote HILab | Unknown | OCR-specialised open-weight VLM | 1 | 1 | |
| 565 | EK-Net++ | Research | — | — | 1 | 1 | |
| 566 | ESALE | East China Normal University | 125M | transformer | 1 | 1 | |
| 567 | EVA-02 (ViT-L/14+) | BAAI | 304M | EVA-02 ViT-L/14+, public data only | 1 | 1 | |
| 568 | EVA-02-L | BAAI | Unknown | EVA-02 Large + Cascade Mask R-CNN | 1 | 1 | |
| 569 | EVA-02-L (LVIS) | BAAI | Unknown | EVA-02 Large + ViTDet | 1 | 1 | |
| 570 | Easter2.0 | Unknown | Unknown | Unknown | 1 | 1 | |
| 571 | Eff-GNN + Word2Vec | Unknown | Unknown | Unknown | 1 | 1 | |
| 572 | Eff-GNN + Word2Vec + Image Embedding | Unknown | Unknown | Unknown | 1 | 1 | |
| 573 | EfficientDet-D7x | — | — | EfficientNet+BiFPN | 1 | 1 | |
| 574 | EfficientNet-B0 | — | 5.3M | CNN | 1 | 1 | |
| 575 | EfficientNetV2-L | — | 120M | CNN | 1 | 1 | |
| 576 | Extend | Extend | Unknown | Document parsing + extraction API | 1 | 1 | |
| 577 | FCENet | CVPR 2021 | — | — | 1 | 1 | |
| 578 | FPHR Paragraph Level (~145 dpi) | Unknown | Unknown | Unknown | 1 | 1 | |
| 579 | FPHR+Aug Line Level (~145 dpi) | Unknown | Unknown | Unknown | 1 | 1 | |
| 580 | FPHR+Aug Paragraph Level (~145 dpi) | Unknown | Unknown | Unknown | 1 | 1 | |
| 581 | Flor | Unknown | Unknown | Unknown | 1 | 1 | |
| 582 | FreeReal+DBNet | SJTU | — | — | 1 | 1 | |
| 583 | GPT-4o (Anchored) | OpenAI | — | Multimodal LLM | 1 | 1 | |
| 584 | Gemini Flash 2 | — | — | Multimodal LLM | 1 | 1 | |
| 585 | Gemma 3 | — | — | — | 1 | 1 | |
| 586 | GoogLeNet | — | — | — | 1 | 1 | |
| 587 | Google Cloud Document AI | Google Cloud | Unknown | Managed document understanding service (layout parser) | 1 | 1 | |
| 588 | GraphCodeBERT | Microsoft | 125M | transformer | 1 | 1 | |
| 589 | GraphCodeBERT+AdvFusion | University of Leicester | 125M | transformer | 1 | 1 | |
| 590 | GreedyRel (query: method + article + steps titles) | Unknown | Unknown | Unknown | 1 | 1 | |
| 591 | GreedyRel (query: method + article titles) | Unknown | Unknown | Unknown | 1 | 1 | |
| 592 | GreedyRel (query: method title) | Unknown | Unknown | Unknown | 1 | 1 | |
| 593 | GreedyRel (query: step + method + article titles) | Unknown | — | extractive | 1 | 1 | |
| 594 | GreedyRel (query: step + method titles) | Unknown | Unknown | Unknown | 1 | 1 | |
| 595 | GreedyRel (query: step title) | Unknown | Unknown | Unknown | 1 | 1 | |
| 596 | Grounding DINO | IDEA Research | Unknown | Open-Set Object Detection with Grounded Pre-Training | 1 | 1 | |
| 597 | IGTR-AR | Yongkun Du et al. | Unknown | Instruction-Guided Transformer (Auto-Regressive variant) | 1 | 1 | |
| 598 | Infinity-Parser 7B | Unknown | 7B | Vision-Language Model | 1 | 1 | |
| 599 | InternImage-H | Shanghai AI Lab | Unknown | Deformable Convolution v3 + Cascade Mask R-CNN | 1 | 1 | |
| 600 | InternImage-H (OneFormer) | PJLab & Tsinghua | — | — | 1 | 1 |
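As a sanity check, the row numbering in this table is consistent with the stated pagination: 768 matching models across 16 pages, with entries 551–600 shown on page 12, implies 50 rows per page. The arithmetic, assuming that inferred page size:

```python
import math

# Pagination arithmetic for this index page. The page size of 50 is inferred
# from the visible row numbers (entries 551-600 on page 12 of 16).
TOTAL = 768      # models matching the current filter
PAGE_SIZE = 50   # inferred, not stated by the site
page = 12

pages = math.ceil(TOTAL / PAGE_SIZE)       # total pages
first = (page - 1) * PAGE_SIZE + 1         # first row on this page
last = min(page * PAGE_SIZE, TOTAL)        # last row on this page
assert (pages, first, last) == (16, 551, 600)
```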