Codesota · Computer Vision · Scene Text Recognition · icdar-2003
Scene Text Recognition · benchmark dataset · 2020 · EN

icdar-2003.

Dataset from Papers With Code

Saturated benchmark

Benchmark abandoned or no longer evaluated by the community

§ 01 · Leaderboard

Best published scores.

12 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: accuracy · higher is better · 12 rows
# | Model | Submitted | Paper / code | accuracy
01 | Yet Another Text Recognizer | Jul 2021 | Why You Should Try the Real Data for the Scene Text Reco… · code | 97.10
02 | SIGA_T | Mar 2022 | Self-supervised Implicit Glyph Attention for Text Recogn… · code | 97.00
03 | SATRN | Oct 2019 | On Recognizing Texts of Arbitrary Shapes with 2D Self-At… · code | 96.70
04 | SAFL | Jan 2022 | SAFL: A Self-Attention Scene Text Recognizer with Focal … · code | 95.00
05 | DAN | Dec 2019 | Decoupled Attention Network for Text Recognition · code | 95.00
06 | CSTR | Feb 2021 | Revisiting Classification Perspective on Scene Text Reco… · code | 94.80
07 | Baek et al. | Apr 2019 | What Is Wrong With Scene Text Recognition Model Comparis… · code | 94.40
08 | ViTSTR | May 2021 | Vision Transformer for Fast and Efficient Scene Text Rec… · code | 94.30
09 | AON | Nov 2017 | AON: Towards Arbitrarily-Oriented Text Recognition · code | 91.50
10 | RARE | Mar 2016 | Robust Scene Text Recognition with Automatic Rectificati… · code | 90.10
11 | STAR-Net | Sep 2016 | papers-with-code · code | 89.90
12 | CRNN | Jul 2015 | An End-to-End Trainable Neural Network for Image-based S… · code | 89.40
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
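The accuracy column above is, for scene text recognition benchmarks, typically word-level exact match: the percentage of cropped-word images whose predicted string equals the ground-truth label. A minimal sketch of that computation (the case-insensitive comparison is an assumption; evaluation protocols differ on case sensitivity and punctuation filtering):

```python
def word_accuracy(predictions, ground_truth):
    """Percentage of samples whose predicted string exactly matches the label.

    Assumes a simple case-insensitive match; real protocols may also
    strip punctuation or restrict the character set before comparing.
    """
    assert len(predictions) == len(ground_truth)
    hits = sum(p.lower() == g.lower() for p, g in zip(predictions, ground_truth))
    return 100.0 * hits / len(ground_truth)

# Hypothetical example: 3 of 4 words recognized correctly -> 75.0
print(word_accuracy(["shop", "EXIT", "2003", "icdar"],
                    ["shop", "exit", "2003", "icdar3"]))
```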
§ 03 · Progress

6 steps of state of the art.

Each row below marks a model that broke the previous record on accuracy. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.

SOTA line · accuracy
  1. Jul 21, 2015 · CRNN · 89.40
  2. Mar 12, 2016 · RARE · 90.10
  3. Nov 12, 2017 · AON · 91.50
  4. Apr 3, 2019 · Baek et al. · 94.40
  5. Oct 10, 2019 · SATRN · 96.70
  6. Jul 29, 2021 · Yet Another Text Recognizer · 97.10
Fig 3 · SOTA-setting models only. 6 entries span Jul 2015 to Jul 2021.
§ 04 · Literature

11 papers tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
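Items 01–03 above amount to making the run deterministic and self-describing. A minimal sketch of what such an entry point might look like (the script name, flags, and output keys are all hypothetical, not a real Codesota interface):

```python
import argparse
import json
import platform
import random

def parse_args(argv=None):
    # Hypothetical CLI for a reproduction script.
    parser = argparse.ArgumentParser(description="Reproduce an icdar-2003 result")
    parser.add_argument("--checkpoint", required=True,
                        help="public checkpoint path or API endpoint (item 01)")
    parser.add_argument("--commit", required=True,
                        help="frozen git commit of the evaluation code (item 02)")
    parser.add_argument("--seed", type=int, default=0,
                        help="frozen RNG seed (item 02)")
    return parser.parse_args(argv)

def declared_environment(args):
    """Item 03: record the environment alongside the reported score."""
    random.seed(args.seed)  # pin randomness before any evaluation runs
    return {
        "python": platform.python_version(),
        "checkpoint": args.checkpoint,
        "commit": args.commit,
        "seed": args.seed,
    }

# Usage sketch with placeholder values:
print(json.dumps(declared_environment(parse_args(
    ["--checkpoint", "ckpt.pt", "--commit", "deadbeef", "--seed", "7"]))))
```

Emitting the environment as JSON next to the score makes discrepancies (item 05) easier to diagnose, since reviewers can diff two declared environments directly.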