Optical Character Recognition · benchmark dataset · 2020 · EN

Benchmarking Chinese Text Recognition: Datasets, B…

Dataset from Papers With Code

§ 01 · Leaderboard

Best published scores.

7 results indexed across 1 metric. The top row marks the current SOTA; ties are broken by submission date.


Primary metric: accuracy · higher is better · 7 rows
#  | Model       | Org | Submitted | Paper / code                                                      | Accuracy
01 | DTrOCR      | —   | Aug 2023  | DTrOCR: Decoder-only Transformer for Optical Character R… · code | 89.60
02 | DTrOCR 105M | —   | Aug 2023  | DTrOCR: Decoder-only Transformer for Optical Character R… · code | 89.60
03 | MaskOCR-L   | —   | Jun 2022  | MaskOCR: Text Recognition with Masked Encoder-Decoder Pr…        | 82.60
04 | TransOCR    | —   | Jun 2021  | papers-with-code · code                                           | 72.80
05 | SRN         | —   | Mar 2020  | Towards Accurate Scene Text Recognition with Semantic Re… · code | 65.00
06 | MORAN       | —   | Jan 2019  | A Multi-Object Rectified Attention Network for Scene Tex… · code | 64.30
07 | SEED        | —   | May 2020  | SEED: Semantics Enhanced Encoder-Decoder Framework for S… · code | 61.20
Fig 2 · Rows sorted by score within each metric; the top row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
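To make the ordering rule concrete, here is a minimal Python sketch of sorting by accuracy descending with submission date as the tie-break. The data structure and exact dates are our own reconstruction (days come from Fig 3 where available; the rest are placeholders), not part of Codesota.

```python
from datetime import date

# Records mirror Fig 2. Days for SEED and DTrOCR 105M are placeholders.
results = [
    {"model": "DTrOCR",      "submitted": date(2023, 8, 30), "accuracy": 89.60},
    {"model": "DTrOCR 105M", "submitted": date(2023, 8, 30), "accuracy": 89.60},
    {"model": "MaskOCR-L",   "submitted": date(2022, 6, 1),  "accuracy": 82.60},
    {"model": "TransOCR",    "submitted": date(2021, 6, 19), "accuracy": 72.80},
    {"model": "SRN",         "submitted": date(2020, 3, 27), "accuracy": 65.00},
    {"model": "MORAN",       "submitted": date(2019, 1, 10), "accuracy": 64.30},
    {"model": "SEED",        "submitted": date(2020, 5, 1),  "accuracy": 61.20},
]

# Sort key (-accuracy, submitted): score-descending order with
# date-ascending tie-breaks; the first row is the current SOTA.
leaderboard = sorted(results, key=lambda r: (-r["accuracy"], r["submitted"]))
for rank, r in enumerate(leaderboard, start=1):
    print(f"{rank:02d}  {r['model']:<12} {r['submitted']}  {r['accuracy']:.2f}")
```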
§ 03 · Progress

5 steps of state of the art.

Each row below marks a model that broke the previous record on accuracy. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.

SOTA line · accuracy
  1. Jan 10, 2019 · MORAN · 64.30
  2. Mar 27, 2020 · SRN · 65.00
  3. Jun 19, 2021 · TransOCR · 72.80
  4. Jun 1, 2022 · MaskOCR-L · 82.60
  5. Aug 30, 2023 · DTrOCR · 89.60
Fig 3 · SOTA-setting models only. 5 entries span Jan 2019 to Aug 2023.
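The progress chart is just a running maximum over the leaderboard. A self-contained sketch of how the record-setters are selected, using the same reconstructed data as above:

```python
from datetime import date

# (model, submission date, accuracy) tuples mirror Fig 2 / Fig 3.
results = [
    ("MORAN",       date(2019, 1, 10), 64.30),
    ("SRN",         date(2020, 3, 27), 65.00),
    ("SEED",        date(2020, 5, 1),  61.20),  # day is a placeholder
    ("TransOCR",    date(2021, 6, 19), 72.80),
    ("MaskOCR-L",   date(2022, 6, 1),  82.60),
    ("DTrOCR",      date(2023, 8, 30), 89.60),
    ("DTrOCR 105M", date(2023, 8, 30), 89.60),
]

best = float("-inf")
for model, submitted, acc in sorted(results, key=lambda r: r[1]):
    if acc > best:  # only a strict improvement sets a new record
        best = acc
        print(f"{submitted}  {model:<10} {acc:.2f}")
# Prints the five record-setters: MORAN, SRN, TransOCR, MaskOCR-L, DTrOCR.
# SEED and DTrOCR 105M are skipped: neither beat the record of its day.
```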
§ 04 · Literature

5 papers tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (see the sketch below)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
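For concreteness, a minimal sketch of what a reproduction script could look like for this checklist. Every name below (checkpoint path, the placeholder predictions) is hypothetical, not a Codesota requirement; a real script would load the model and run the benchmark split where the placeholders sit.

```python
#!/usr/bin/env python3
# Hypothetical reproduction-script sketch: pins a seed, declares the
# evaluation environment, and emits one row for the single declared
# metric (accuracy). All paths and data below are placeholders.
import platform
import random

SEED = 42                              # frozen seed (checklist item 02)
CHECKPOINT = "checkpoints/model.pt"    # placeholder public checkpoint path

def declare_environment() -> None:
    """Print the evaluation environment (checklist item 03)."""
    print(f"python: {platform.python_version()}")
    print(f"platform: {platform.platform()}")

def evaluate(checkpoint: str, seed: int) -> float:
    """Placeholder evaluation: sequence-level exact-match accuracy.
    Swap the dummy predictions/labels for a real model run."""
    random.seed(seed)
    preds  = ["中文", "识别", "基准"]   # stand-in model outputs
    labels = ["中文", "识别", "测试"]   # stand-in ground truth
    correct = sum(p == t for p, t in zip(preds, labels))
    return 100.0 * correct / len(labels)

if __name__ == "__main__":
    declare_environment()
    acc = evaluate(CHECKPOINT, SEED)
    print(f"accuracy: {acc:.2f}")      # one row per declared metric
```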