Codesota · Computer Vision · OCR · OCRBench
OCR · benchmark dataset · ENGLISH

OCRBench: On the Hidden Mystery of OCR in Large Multimodal Models.

OCRBench is a comprehensive evaluation benchmark designed to assess the OCR capabilities of Large Multimodal Models. It comprises five components: Text Recognition, Scene Text-Centric VQA, Document-Oriented VQA, Key Information Extraction, and Handwritten Mathematical Expression Recognition. The benchmark includes 1,000 question-answer pairs, all of which undergo manual verification and correction to ensure precise evaluation.
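Since each of the 1,000 question-answer pairs contributes equally, a model's OCRBench score is simply its count of correct answers, with a maximum of 1000. The sketch below illustrates this scoring shape; the function name and the case-insensitive substring match rule are assumptions for illustration, not the official evaluation harness.

```python
# Hedged sketch of OCRBench-style scoring: one point per question,
# so a perfect run scores 1000. The match rule assumed here (any
# accepted ground-truth answer appearing as a case-insensitive
# substring of the response) is illustrative, not the official harness.

def score_ocrbench(predictions, ground_truths):
    """predictions: list of model responses.
    ground_truths: list of lists of accepted answers, one per question."""
    correct = 0
    for pred, answers in zip(predictions, ground_truths):
        if any(ans.lower() in pred.lower() for ans in answers):
            correct += 1
    return correct  # max len(predictions); 1000 for the full benchmark

# e.g. score_ocrbench(["The sign reads STOP"], [["stop"]]) -> 1
```

Under this convention, a leaderboard entry of 860 means 860 of the 1,000 questions were answered correctly.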

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

1 result indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: Score · higher is better
1 row
#    Model             Org   Submitted   Paper / code                          Score
01   HunyuanOCR (1B)   –     Nov 2025    HunyuanOCR Technical Report · code    860
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
§ 03 · Progress

1 step
of state of the art.

Each row below marks a model that broke the previous record on Score. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.

SOTA line · Score
  1. Nov 24, 2025 · HunyuanOCR (1B) · 860
Fig 3 · SOTA-setting models only. 1 entry, dated Nov 2025.
§ 04 · Literature

1 paper
tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

  • HunyuanOCR Technical Report
    Hunyuan Vision Team, Pengyuan Lyu, Xingyu Wan, Gengluo Li, Shangpin Peng, Weinong Wang, Liang Wu, Huawen Shen, Yu Zhou, Canhui Tang, Qi Yang, Qiming Peng, Bin Luo, Hower Yang, Houwen Peng, Hongming Yang, Senhao Xie, Binghong Wu, Mana Yang, Sergey Wang, Raccoon Liu, Dick Zhu, Jie Jiang, Linus, Han Hu, Chengquan Zhang
    Nov 2025 · HunyuanOCR (1B)
§ 06 · Contribute

Have a score that beats
this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
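The checklist above can be sketched as a single reproduction entry point that pins the commit and seed and emits the declared environment alongside one row per metric. Everything here (`REPO_COMMIT`, `declare_environment`, the output shape) is an illustrative assumption, not a Codesota-prescribed format.

```python
# Hedged sketch of a submission's reproduction script: pin a commit,
# fix a seed, declare the environment, and emit one row per metric.
# All names and the JSON shape are illustrative assumptions.
import json
import platform
import random
import sys

REPO_COMMIT = "deadbeef"  # placeholder: the exact commit you evaluated
SEED = 42                 # fixed seed so reruns are deterministic

def declare_environment():
    """Record the evaluation environment to publish with the score."""
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "commit": REPO_COMMIT,
        "seed": SEED,
    }

def main():
    random.seed(SEED)
    # A real script would load the pinned checkpoint and run the
    # benchmark here; the result rows are stubbed for illustration.
    rows = [{"metric": "Score", "value": None}]
    json.dump({"environment": declare_environment(), "results": rows},
              sys.stdout, indent=2)

if __name__ == "__main__":
    main()
```

Emitting the environment in the same artifact as the results makes discrepancies between runs easier to trace back to a dependency or seed mismatch.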