CheXpert
224,316 chest radiographs from 65,240 patients, labeled for 14 pathologies. Labels include an explicit uncertainty class, and the validation set carries expert radiologist annotations. One of the most widely used benchmarks for chest X-ray classification.
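The uncertainty class mentioned above is encoded as `-1` in the label CSVs that ship with the dataset, and papers on this benchmark typically resolve it with a fixed policy (e.g. U-Ones maps uncertain to positive, U-Zeros to negative). A minimal sketch of that preprocessing, assuming a pandas-readable CSV with the standard pathology column names (the function name and path are illustrative, not part of the official release):

```python
import pandas as pd

# Five pathologies used for the CheXpert competition metric.
LABELS = ["Atelectasis", "Cardiomegaly", "Consolidation", "Edema", "Pleural Effusion"]

def load_labels(csv_path: str, policy: str = "ones") -> pd.DataFrame:
    """Load CheXpert labels and resolve the uncertainty marker (-1).

    policy="ones"  maps uncertain to positive (U-Ones);
    policy="zeros" maps uncertain to negative (U-Zeros).
    Blank cells (no mention in the report) are treated as negative.
    """
    df = pd.read_csv(csv_path)
    fill = 1.0 if policy == "ones" else 0.0
    for col in LABELS:
        df[col] = df[col].fillna(0.0).replace(-1.0, fill)
    return df
```

The choice of policy matters: the original CheXpert paper found that the best policy differs per pathology, so some systems mix policies column by column.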
Benchmark Stats
- Models: 7
- Papers: 7
- Metrics: 1
Metric: AUROC (higher is better)
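The scores below are mean AUROC over the evaluated pathologies. A minimal sketch of that aggregation using scikit-learn's `roc_auc_score` (the function name `mean_auroc` is illustrative; columns with only one class present are skipped because AUROC is undefined there):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auroc(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """Macro-average AUROC over pathology columns.

    y_true:  (n_samples, n_pathologies) binary ground truth
    y_score: (n_samples, n_pathologies) predicted probabilities
    """
    aucs = []
    for j in range(y_true.shape[1]):
        col = y_true[:, j]
        if len(np.unique(col)) < 2:
            continue  # AUROC undefined without both classes present
        aucs.append(roc_auc_score(col, y_score[:, j]))
    return float(np.mean(aucs))
```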
| Rank | Model | Source | AUROC | Year | Paper |
|---|---|---|---|---|---|
| 1 | chexpert-auc-maximizer (competition-winning ensemble; mean AUC across the 5 competition pathologies) | Editorial | 93.0 | 2025 | Source |
| 2 | biovil (Microsoft's biomedical vision-language model) | Editorial | 89.1 | 2025 | Source |
| 3 | chexzero (zero-shot, no task-specific training; expert-level on multiple pathologies) | Editorial | 88.6 | 2025 | Source |
| 4 | gloria (global-local representations; zero-shot evaluation) | Editorial | 88.2 | 2025 | Source |
| 5 | medclip (decoupled contrastive learning; zero-shot transfer) | Editorial | 87.8 | 2025 | Source |
| 6 | torchxrayvision (pre-trained on multiple datasets; strong transfer-learning baseline) | Editorial | 87.4 | 2025 | Source |
| 7 | densenet-121-cxr (DenseNet-121 baseline trained on the CheXpert training set) | Editorial | 86.5 | 2025 | Source |