General OCR Capabilities
Comprehensive benchmarks covering multiple aspects of OCR performance.
4 datasets · 92 results · Canonical metric: overall-en-private
Canonical Benchmark
OCRBench v2
Tests 8 core OCR capabilities across 23 tasks, evaluating large multimodal models (LMMs) on text recognition, referring, and extraction.
Primary metric: overall-en-private
Top 10
Leading models on OCRBench v2.
| Rank | Model | overall-en-private | Year | Source |
|---|---|---|---|---|
| 1 | Qwen2.5-VL-72B | 63.7 | 2025 | paper |
| 2 | seed-1.6-vision | 62.2 | 2025 | paper |
| 3 | gemini-25-pro | 62.2 | 2025 | paper |
| 4 | Qwen2.5-VL-72B | 61.5 | 2025 | paper |
| 5 | qwen3-omni-30b | 61.3 | 2025 | paper |
| 6 | nemotron-nano-v2-vl | 61.2 | 2025 | paper |
| 7 | Qianfan-OCR | 60.8 | 2026 | paper |
| 8 | gemini-25-pro | 59.3 | 2025 | paper |
| 9 | minicpm-v-4.5-8b | 58.8 | 2025 | paper |
| 10 | sail-vl2-8b | 57.6 | 2025 | paper |
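A ranking like the one above can be reproduced from raw scores with a stable descending sort; a minimal sketch (the entry list is a small illustrative subset of the table's models and scores, not the full leaderboard):

```python
# Illustrative sketch: rank leaderboard entries by score, descending.
# Python's sorted() is stable, so ties (e.g. the two 62.2 scores) keep
# their original submission order.
entries = [
    ("seed-1.6-vision", 62.2),
    ("minicpm-v-4.5-8b", 58.8),
    ("Qwen2.5-VL-72B", 63.7),
    ("gemini-25-pro", 62.2),
]

ranked = sorted(entries, key=lambda e: e[1], reverse=True)
for rank, (model, score) in enumerate(ranked, start=1):
    print(f"{rank}. {model}: {score}")
```

Because the sort is stable, a model submitted earlier keeps the better rank when scores tie, which matches how most leaderboards break ties.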