Track How Models Improved Over Time
See historical state-of-the-art (SOTA) progress on classic benchmarks and compare how different models performed on the same datasets: 1,500+ results across 140+ datasets from the Papers With Code archive.
Data source: paperswithcode/paperswithcode-data
1,500+ benchmark results · 140+ datasets tracked · 450+ models compared
Text Detection: 324 results tracked
Text Recognition: 289 results tracked
Document Layout Analysis: 178 results tracked
Handwriting Recognition: 156 results tracked
Document Classification: 143 results tracked
OCR End-to-End: 112 results tracked
Scene Text Recognition: 98 results tracked
Table Detection: 87 results tracked
Mathematical Expression Recognition: 65 results tracked
Text Spotting: 48 results tracked
About This Data
This data is sourced from the Papers With Code open dataset. It includes historical benchmark results from published papers, allowing you to track how model performance has improved over time on standard academic benchmarks.
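If you want to reproduce this kind of progression locally, the sketch below shows one way to compute a running state-of-the-art curve from the evaluation-tables dump distributed via the paperswithcode/paperswithcode-data repository. Note that the file name (evaluation-tables.json.gz), the local path, the example dataset and metric names, and the assumed record layout (tasks containing datasets, each with a "sota" block of rows carrying model_name, paper_date, and metrics) are assumptions about the dump's schema, not something guaranteed by this page.

```python
import gzip
import json
from datetime import datetime

# Assumed local copy of the evaluation-tables dump linked from the
# paperswithcode/paperswithcode-data repository; the path is a placeholder.
DUMP_PATH = "evaluation-tables.json.gz"

# Illustrative dataset and metric names; adjust to whatever appears
# in your copy of the dump.
TARGET_DATASET = "ICDAR 2015"
TARGET_METRIC = "F-Measure"


def load_tasks(path):
    """Load the gzipped JSON dump into a list of task records."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        return json.load(f)


def sota_progression(tasks, dataset_name, metric_name):
    """Collect (date, model, score) entries for one dataset/metric,
    sorted by publication date. Assumes the schema described above."""
    rows = []
    for task in tasks:
        for ds in task.get("datasets", []):
            if ds.get("dataset") != dataset_name:
                continue
            for row in (ds.get("sota") or {}).get("rows", []):
                date_str = row.get("paper_date")
                score_str = (row.get("metrics") or {}).get(metric_name)
                if not date_str or score_str is None:
                    continue
                try:
                    date = datetime.fromisoformat(date_str[:10])
                    score = float(str(score_str).rstrip("%"))
                except ValueError:
                    continue  # skip rows with unparseable dates or scores
                rows.append((date, row.get("model_name", "?"), score))
    return sorted(rows)


if __name__ == "__main__":
    tasks = load_tasks(DUMP_PATH)
    best = float("-inf")
    # Print only the rows that advanced the state of the art.
    for date, model, score in sota_progression(tasks, TARGET_DATASET, TARGET_METRIC):
        if score > best:
            best = score
            print(f"{date:%Y-%m-%d}  {model:40s}  {TARGET_METRIC}={score:.2f}")
```

Swapping TARGET_DATASET and TARGET_METRIC for any benchmark and metric present in your copy of the dump gives the same per-benchmark progression that this page summarizes.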