Next-generation scene text recognition (STR) benchmark with 4M labeled and 10M unlabeled images. Accuracy drops by 33-48% versus standard benchmarks (IIIT5K, etc.), exposing real-world challenges such as artistic, multi-oriented, and occluded text.
8 results indexed on 1 metric (accuracy, in percent; see the scoring sketch after the table). The top row (rank 01) is the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|
| 01 | CLIP4STR-B | Research | Mar 2026 | arXiv | 70.80 |
| 02 | PARSeq | Research | Mar 2026 | arXiv / OSS | 67.80 |
| 03 | CLIP4STR | Research | Mar 2026 | arXiv / OSS | 67.30 |
| 04 | LPV-S | Research | Mar 2026 | arXiv / OSS | 65.10 |
| 05 | PARSeq | Research | Mar 2026 | arXiv / OSS | 63.80 |
| 06 | MAERec-S | Research | Mar 2026 | arXiv / OSS | 62.40 |
| 07 | MATRN | Research | Mar 2026 | arXiv | 61.20 |
| 08 | CDistNet | Research | Mar 2026 | arXiv / OSS | 56.20 |
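For reference, a minimal sketch of how word accuracy is typically scored in STR (case-insensitive exact match over full strings; the normalization details here are our assumption, not a spec of this benchmark):

```python
def word_accuracy(predictions, labels):
    """Fraction of images whose predicted string exactly matches the label.

    Case-insensitive comparison is a common STR convention; this
    benchmark's exact normalization rules may differ.
    """
    assert len(predictions) == len(labels), "prediction/label count mismatch"
    correct = sum(p.lower() == g.lower() for p, g in zip(predictions, labels))
    return 100.0 * correct / len(labels)  # reported as a percentage, e.g. 70.80
```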
Each row below marks a model that broke the previous record on accuracy; higher scores win, and each subsequent entry improved on the previous best. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here. A sketch of this record-filtering rule follows.
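Concretely, a running-maximum filter over the submissions reproduces the chart; the function and tuple layout below are ours, not part of the benchmark tooling:

```python
def sota_entries(submissions):
    """Return only the submissions that set a new accuracy record.

    `submissions` is a list of (model, submitted, accuracy) tuples,
    assumed sorted by submission date (oldest first).
    """
    best = float("-inf")
    record_setters = []
    for model, submitted, accuracy in submissions:
        if accuracy > best:  # strictly better: a tie keeps the earlier entry
            best = accuracy
            record_setters.append((model, submitted, accuracy))
    return record_setters
```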
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
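As a rough illustration of what a reproduction script's final scoring step might look like (the CLI flags and file layout below are hypothetical, not this project's actual interface; a real submission would first run its checkpoint to produce the predictions file):

```python
#!/usr/bin/env python3
"""Hypothetical reproduction-script skeleton: reads ground-truth labels and
model predictions (one string per line, aligned) and prints word accuracy."""
import argparse
from pathlib import Path

def main():
    parser = argparse.ArgumentParser(description="Score a benchmark submission.")
    parser.add_argument("--predictions", required=True, help="one predicted string per line")
    parser.add_argument("--labels", required=True, help="one ground-truth string per line")
    args = parser.parse_args()

    preds = Path(args.predictions).read_text().splitlines()
    labels = Path(args.labels).read_text().splitlines()
    assert len(preds) == len(labels), "prediction/label count mismatch"

    # Same case-insensitive exact-match convention as the sketch above.
    correct = sum(p.lower() == g.lower() for p, g in zip(preds, labels))
    print(f"accuracy: {100.0 * correct / len(labels):.2f}")

if __name__ == "__main__":
    main()
```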