224,316 chest radiographs from 65,240 patients with 14 pathology labels. Includes uncertainty labels and expert radiologist annotations for the validation set. A standard benchmark for chest X-ray classification.
Seven results are indexed on one metric (AUROC). The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | AUROC (%) |
|---|---|---|---|---|---|
| 01 | CheXpert AUC Maximizer (OSS) | Stanford | Dec 2025 | stanford-leaderboard | 93.00 |
| 02 | BioViL (OSS) | Microsoft | Dec 2025 | microsoft-research | 89.10 |
| 03 | CheXzero (OSS) | Harvard/MIT | Dec 2025 | research-paper | 88.60 |
| 04 | GLoRIA (OSS) | Stanford | Dec 2025 | research-paper | 88.20 |
| 05 | MedCLIP (OSS) | Research | Dec 2025 | research-paper | 87.80 |
| 06 | TorchXRayVision (OSS) | Cohen Lab | Dec 2025 | github-readme | 87.40 |
| 07 | DenseNet-121 (Chest X-ray) (OSS) | Research | Dec 2025 | research-paper | 86.50 |
Each step on the progress chart marks a model that broke the previous record on AUROC; higher scores win, and each subsequent entry improved on the previous best. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
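The AUROC column reports a mean over the per-pathology AUROCs. The leaderboard does not publish its exact evaluation script, so the following is a minimal pure-Python sketch under two assumptions: the aggregate is an unweighted macro average, and ties in score count as half a win (the Mann-Whitney formulation of AUROC).

```python
def auroc(labels, scores):
    """Probability that a randomly chosen positive outranks a randomly
    chosen negative, with ties counting half (Mann-Whitney AUROC)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return None  # AUROC is undefined when only one class is present
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auroc(label_cols, score_cols):
    """Unweighted mean of per-pathology AUROCs, skipping undefined ones."""
    vals = [auroc(y, s) for y, s in zip(label_cols, score_cols)]
    vals = [v for v in vals if v is not None]
    return sum(vals) / len(vals)

# Toy example with two pathology columns (real evaluation uses 14)
y = [[1, 0, 1, 0], [0, 1, 1, 0]]
s = [[0.9, 0.1, 0.8, 0.3], [0.6, 0.8, 0.3, 0.1]]
print(round(macro_auroc(y, s), 3))  # → 0.875
```

In practice submissions typically compute this with `sklearn.metrics.roc_auc_score`; the hand-rolled version above just makes the ranking semantics explicit.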
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
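The leaderboard does not publish a required interface for the reproduction script, so the sketch below is purely illustrative: the pathology subset, the `predict` stand-in, and the CSV layout are all assumptions, with checkpoint inference replaced by a dummy constant score.

```python
#!/usr/bin/env python3
"""Hypothetical shape of a reproduction script: run inference over the
validation images and emit one score per pathology per image as CSV."""
import csv
import io

# Illustrative subset of the 14 CheXpert pathology labels
PATHOLOGIES = ["Atelectasis", "Cardiomegaly", "Consolidation",
               "Edema", "Pleural Effusion"]

def predict(image_id):
    # Stand-in for checkpoint inference; a real script would load the
    # submitted weights and run the model on the image here.
    return [0.5 for _ in PATHOLOGIES]

def write_predictions(image_ids, out_file):
    writer = csv.writer(out_file)
    writer.writerow(["image_id", *PATHOLOGIES])
    for image_id in image_ids:
        writer.writerow([image_id, *predict(image_id)])

if __name__ == "__main__":
    buf = io.StringIO()
    write_predictions(["patient64541_study1_view1"], buf)
    print(buf.getvalue().splitlines()[0])
```

A real submission would read image paths from the released validation split and load the accompanying checkpoint instead of the dummy `predict`.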