CheXpert


224,316 chest radiographs from 65,240 patients, labeled for 14 pathologies. Labels include an explicit uncertainty class, and the validation set carries expert radiologist annotations. One of the most widely used benchmarks for chest X-ray classification.
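The uncertainty labels are a distinctive feature of CheXpert: each pathology can be positive, negative, unmentioned, or uncertain. A common baseline approach is to map uncertain labels to a fixed binary value before training ("U-Ones" maps them to positive, "U-Zeros" to negative). A minimal sketch, assuming the usual encoding of 1/0/-1 for positive/negative/uncertain (the function name and policy strings are illustrative, not from any particular library):

```python
import numpy as np

def apply_uncertainty_policy(labels: np.ndarray, policy: str = "U-Ones") -> np.ndarray:
    """Map CheXpert's -1 (uncertain) labels to a binary target.

    labels: array with 1 = positive, 0 = negative, -1 = uncertain.
    policy: "U-Ones" maps uncertain to 1, "U-Zeros" maps uncertain to 0.
    """
    mapped = labels.astype(float).copy()
    fill = 1.0 if policy == "U-Ones" else 0.0
    mapped[mapped == -1] = fill
    return mapped

labels = np.array([1, 0, -1, -1, 1])
print(apply_uncertainty_policy(labels, "U-Ones"))   # [1. 0. 1. 1. 1.]
print(apply_uncertainty_policy(labels, "U-Zeros"))  # [1. 0. 0. 0. 1.]
```

In practice the best policy is often chosen per pathology (e.g. U-Ones for Atelectasis, U-Zeros for Consolidation), since the clinical meaning of "uncertain" differs across findings.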

Benchmark Stats

Models: 7
Papers: 7
Metrics: 1


Metric: AUROC (higher is better)

Rank / Model (Source, Year): Score

1. chexpert-auc-maximizer (Editorial, 2025): 93
   Mean AUC across 5 competition pathologies. Competition-winning ensemble.
2. biovil (Editorial, 2025): 89.1
   Microsoft's biomedical vision-language model.
3. chexzero (Editorial, 2025): 88.6
   Zero-shot performance without task-specific training. Expert-level on multiple pathologies.
4. gloria (Editorial, 2025): 88.2
   Global-Local Representations. Zero-shot evaluation.
5. medclip (Editorial, 2025): 87.8
   Decoupled contrastive learning. Zero-shot transfer.
6. torchxrayvision (Editorial, 2025): 87.4
   Pre-trained on multiple datasets. Strong transfer-learning baseline.
7. densenet-121-cxr (Editorial, 2025): 86.5
   Baseline DenseNet-121 trained on the CheXpert training set.
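The top-ranked entry reports a mean AUC over the five CheXpert competition pathologies (Atelectasis, Cardiomegaly, Consolidation, Edema, Pleural Effusion). A minimal sketch of that scoring scheme, using scikit-learn's `roc_auc_score` and synthetic predictions in place of real model outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# The five CheXpert competition pathologies used for the mean-AUC score.
pathologies = ["Atelectasis", "Cardiomegaly", "Consolidation",
               "Edema", "Pleural Effusion"]

# Synthetic stand-ins for real data: binary ground truth and noisy
# model scores that loosely track the labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(200, 5))
y_score = np.clip(y_true + rng.normal(0, 0.6, size=(200, 5)), 0.0, 1.0)

# AUROC per pathology, then the unweighted mean across the five.
per_pathology = [roc_auc_score(y_true[:, i], y_score[:, i]) for i in range(5)]
mean_auroc = float(np.mean(per_pathology))
print({p: round(a, 3) for p, a in zip(pathologies, per_pathology)})
print("mean AUROC:", round(mean_auroc, 3))
```

Leaderboard scores above are this mean expressed as a percentage (e.g. 93 corresponds to a mean AUROC of 0.93); the synthetic data here is only to show the computation.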
