
CheXpert

Medical imaging benchmark (multi-label chest X-ray classification)

Total Results: 7
Models Tested: 7
Metrics: 1
Last Updated: 2025-12-19

AUROC

Higher is better

| Rank | Model | Score | Source | Notes |
|------|-------|-------|--------|-------|
| 1 | chexpert-auc-maximizer | 93 | stanford-leaderboard | Mean AUC across the 5 competition pathologies; competition-winning ensemble. |
| 2 | biovil | 89.1 | microsoft-research | Microsoft's biomedical vision-language model. |
| 3 | chexzero | 88.6 | research-paper | Zero-shot performance without task-specific training; expert-level on multiple pathologies. |
| 4 | gloria | 88.2 | research-paper | Global-local representations; zero-shot evaluation. |
| 5 | medclip | 87.8 | research-paper | Decoupled contrastive learning; zero-shot transfer. |
| 6 | torchxrayvision | 87.4 | github-readme | Pre-trained on multiple datasets; strong transfer-learning baseline. |
| 7 | densenet-121-cxr | 86.5 | research-paper | Baseline DenseNet-121 trained on the CheXpert training set. |
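The scores above are macro-averaged AUROC: an AUROC is computed per pathology, then the unweighted mean is taken over the five CheXpert competition pathologies. Below is a minimal sketch of that computation in pure Python; the pathology names mirror the competition set, but the toy labels and scores are illustrative only, not leaderboard data.

```python
# Sketch: macro-averaged AUROC as reported on the CheXpert leaderboard.
# Toy data only -- not actual model predictions.

def auroc(labels, scores):
    """AUROC = P(score of a random positive > score of a random negative),
    counting ties as 0.5. Pairwise definition, fine for small examples."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative label")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mean_auroc(per_pathology):
    """Macro average: AUROC per pathology, then the unweighted mean."""
    return sum(auroc(y, s) for y, s in per_pathology.values()) / len(per_pathology)

if __name__ == "__main__":
    # The 5 CheXpert competition pathologies, with illustrative predictions.
    data = {
        "Atelectasis":      ([1, 0, 1, 0], [0.9, 0.2, 0.7, 0.4]),
        "Cardiomegaly":     ([1, 1, 0, 0], [0.8, 0.6, 0.3, 0.1]),
        "Consolidation":    ([0, 1, 0, 1], [0.2, 0.9, 0.1, 0.8]),
        "Edema":            ([1, 0, 0, 1], [0.7, 0.7, 0.2, 0.9]),
        "Pleural Effusion": ([0, 0, 1, 1], [0.3, 0.4, 0.6, 0.95]),
    }
    print(round(mean_auroc(data), 3))  # prints 0.975
```

The pairwise formulation is O(pos x neg) but matches the rank-based AUROC exactly, including the 0.5 credit for tied scores; production evaluations typically use an optimized equivalent such as scikit-learn's `roc_auc_score`.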