CodeSOTA · OCR · Power Ranking
The condensed answer · who’s actually best on average
Issue: April 27, 2026
§ 00 · Premise

Which OCR model is best on average?

A single high score is easy to game — train on the test set, hand-tune one document type, publish a paper. Average performance across many benchmarks is harder to fake.

We rank every OCR model that placed on at least 2 of 9 public OCR benchmarks. Then — where we’ve verified the model ourselves — we show our own number next to the public consensus.

01 · Per-benchmark percentile

Within each benchmark we rank every model that has a score, then map the rank to a 0–100 percentile (top = 100). This neutralises metric direction: CER is lower-is-better while the OmniDoc composite is higher-is-better, yet after the mapping both sit on the same 0–100 axis.
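A minimal sketch of that mapping, in Python. The function name and the tie handling (ties broken by sort order) are our illustration, not the published harness:

```python
def to_percentiles(scores: dict[str, float], higher_is_better: bool = True) -> dict[str, float]:
    """Map each model's raw benchmark score to a 0-100 percentile (best = 100)."""
    # Sort so the best score comes first, regardless of metric direction.
    ordered = sorted(scores, key=scores.get, reverse=higher_is_better)
    n = len(ordered)
    if n == 1:
        return {ordered[0]: 100.0}
    # Rank 0 (best) -> 100, rank n-1 (worst) -> 0, evenly spaced in between.
    return {m: 100.0 * (n - 1 - i) / (n - 1) for i, m in enumerate(ordered)}

# Lower-is-better metric (CER): the smallest error rate maps to percentile 100.
cer = {"model-a": 0.021, "model-b": 0.037, "model-c": 0.090}
print(to_percentiles(cer, higher_is_better=False))
# {'model-a': 100.0, 'model-b': 50.0, 'model-c': 0.0}
```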

02 · Average across coverage

Power score is the unweighted mean of a model's percentiles across the benchmarks where it has a score. We require a minimum of 2 benchmarks — one strong showing isn't enough.
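A sketch of that aggregation under the same caveat (illustrative only; `MIN_COVERAGE` mirrors the gate described above):

```python
MIN_COVERAGE = 2  # a model must place on at least 2 of the 9 benchmarks

def power_score(percentiles: dict[str, float]) -> float | None:
    """Unweighted mean of a model's per-benchmark percentiles, or None if under-covered."""
    if len(percentiles) < MIN_COVERAGE:
        return None  # one strong showing isn't enough
    return sum(percentiles.values()) / len(percentiles)

# Qianfan-OCR's row from the table: (94 + 80 + 75 + 68) / 4 = 79.25, shown as 79.3
print(power_score({"OmniDoc": 94, "OCRBench EN": 80, "OCRBench ZH": 75, "olmOCR": 68}))
```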

03 · Our own column, when we have one

Where CodeSOTA has run its own eval (currently 2 of 21 ranked models), the right-most column shows that score. When the public consensus and our number disagree, that disagreement is the most useful thing on this page.

§ 01 · Ranking

The Power Ranking, 21 models.

Sorted by average percentile across the nine benchmarks. The Coverage column is load-bearing: a model on top with 2/9 is making a narrower claim than one on top with 6/9.

Per-benchmark percentiles are listed after each model: ≥ 75 is a top-quartile showing, ≤ 25 a bottom-quartile one.

 # | Model             | Power | Coverage | CodeSOTA verified                  | Per-benchmark percentile
01 | Qwen2.5-VL-72B    |  98.5 | 2 / 9    | not yet                            | OCRBench EN 97 · OCRBench ZH 100
02 | Gemini 2.5 Pro    |  82.6 | 5 / 9    | not yet                            | OmniDoc 63 · OCRBench EN 87 · OCRBench ZH 88 · MME-VideoOCR 100 · Thai-OCR 75
03 | PaddleOCR-VL      |  82.5 | 2 / 9    | not yet                            | OmniDoc 91 · olmOCR 74
04 | Qianfan-OCR       |  79.3 | 4 / 9    | not yet                            | OmniDoc 94 · OCRBench EN 80 · OCRBench ZH 75 · olmOCR 68
05 | PaddleOCR-VL-1.5  |  77.5 | 2 / 9    | not yet                            | OmniDoc 97 · olmOCR 58
06 | Claude Sonnet 4   |  66.5 | 2 / 9    | not yet                            | OCRBench EN 33 · Thai-OCR 100
07 | minicpm-v-4.5-8b  |  63.0 | 2 / 9    | not yet                            | OCRBench EN 63 · OCRBench ZH 63
08 | Gemini 1.5 Pro    |  60.0 | 2 / 9    | not yet                            | CC-OCR 100 · MME-VideoOCR 20
09 | dots.ocr 3B       |  59.5 | 2 / 9    | not yet                            | OmniDoc 66 · olmOCR 53
10 | sail-vl2-8b       |  58.5 | 2 / 9    | not yet                            | OCRBench EN 67 · OCRBench ZH 50
11 | GPT-4o            |  53.3 | 4 / 9    | not yet                            | OCRBench EN 77 · CC-OCR 25 · MME-VideoOCR 40 · KITAB 71
12 | MinerU 2.5        |  52.5 | 2 / 9    | not yet                            | OmniDoc 84 · olmOCR 21
13 | GPT-4o Mini       |  47.0 | 2 / 9    | not yet                            | OCRBench EN 37 · KITAB 57
14 | claude-3.5-sonnet |  45.5 | 2 / 9    | not yet                            | OCRBench EN 53 · OCRBench ZH 38
15 | Qwen2.5-VL 72B    |  40.0 | 2 / 9    | not yet                            | MME-VideoOCR 80 · Thai-OCR 0
16 | Qwen2-VL-72B      |  36.5 | 2 / 9    | not yet                            | OCRBench EN 60 · OCRBench ZH 13
17 | Mistral OCR 3     |  34.5 | 2 / 9    | 94.9 % acc · 3.7 % CER · 7.1 % WER | OmniDoc 22 · olmOCR 47
18 | InternVL2.5-78B   |  34.0 | 2 / 9    | not yet                            | OCRBench EN 43 · OCRBench ZH 25
19 | gpt-4o-2024       |  28.5 | 2 / 9    | not yet                            | OCRBench EN 57 · OCRBench ZH 0
20 | Qwen2.5-VL 32B    |  25.0 | 2 / 9    | not yet                            | MME-VideoOCR 0 · Thai-OCR 50
21 | mistral-ocr-2512  |  11.0 | 2 / 9    | 1.22 pages/s                       | OmniDoc 19 · OCRBench EN 3
Tab 1 · Power score = mean of per-benchmark percentiles. Coverage gate ≥ 2. CodeSOTA-verified column shows our own numbers when we have run the model in-house.
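For reference, the CER and WER in the verified column are the standard edit-distance rates: the minimum number of edits needed to turn the model's output into the reference, divided by the reference length in characters or words respectively. A minimal sketch of that computation (our illustration of the standard definition, not necessarily the exact in-house harness):

```python
def levenshtein(a: list[str], b: list[str]) -> int:
    """Edit distance (insertions, deletions, substitutions) between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def cer(hypothesis: str, reference: str) -> float:
    """Character error rate: character-level edit distance over reference length."""
    return levenshtein(list(hypothesis), list(reference)) / len(reference)

def wer(hypothesis: str, reference: str) -> float:
    """Word error rate: word-level edit distance over reference word count."""
    return levenshtein(hypothesis.split(), reference.split()) / len(reference.split())
```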
§ 02 · Why a second column

Public benchmarks aren’t enough.

Three problems compound. One: popular OCR benchmarks (OmniDoc, OCRBench, olmOCR) are easy to overfit — six months after a paper ships, the test set is in the next training run. Two: they miss the document types that actually pay rent — Polish invoices, German handwritten medical forms, scanned legacy PDFs with deliberate redactions. Three: a vendor’s self-reported score is a marketing artefact until somebody else runs the same eval.

Our verified column closes the third gap. The first two we close with a hold-out architecture: methodology and sample items are public, the actual test set rotates quarterly and stays private — so even when our questions eventually leak into a training corpus, they’re no longer the questions we’re using.
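One way a rotation like that can be made deterministic and auditable, sketched below. This is entirely our illustration; the paragraph above commits only to a quarterly rotation of a private set. The idea: salt a hash with the quarter label, so each quarter draws a different, reproducible subset of the private pool.

```python
import hashlib

def quarter_slice(item_ids: list[str], quarter: str, keep: float = 0.25) -> list[str]:
    """Deterministically pick this quarter's hold-out items from the private pool.

    Salting the hash with the quarter label means each rotation draws a
    different, reproducible subset; a leaked quarter invalidates only itself.
    """
    def bucket(item_id: str) -> float:
        digest = hashlib.sha256(f"{quarter}:{item_id}".encode()).digest()
        return int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return [i for i in item_ids if bucket(i) < keep]

# e.g. quarter_slice(pool, "2026-Q2") selects roughly 25 % of the pool for Q2 runs
```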

Currently 2 of 21 models on this page have a CodeSOTA-verified score. Expanding that coverage is the work.

§ 03 · Request

Want a model verified against your docs?

If you’re evaluating OCR for production and a model on this list doesn’t have a CodeSOTA-verified score, tell us. We’ll prioritise what real practitioners are about to deploy over what arXiv published last week.