Model card
DeepSolo (with pre-training).
ViTAE-Transformer · open-source · Unknown params · DETR-like Transformer decoder with explicit points
DeepSolo pre-trained on Synth150K + MLT17 + IC13 + IC15. CVPR 2023.
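The architecture tag above describes decoding text as explicit point queries in a DETR-style Transformer decoder. Below is a minimal PyTorch sketch of that general idea only; `PointQueryDecoder`, the layer counts, dimensions, and the two output heads are illustrative placeholders, not DeepSolo's actual modules or configuration.

```python
# Sketch of a DETR-style decoder driven by explicit point queries.
# All names and sizes here are illustrative, not DeepSolo's real config.
import torch
import torch.nn as nn

class PointQueryDecoder(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=6, n_points=25, n_classes=97):
        super().__init__()
        # One learnable embedding per explicit point along a text center line.
        self.point_queries = nn.Embedding(n_points, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        # Per-point heads: 2-D coordinates and a character class.
        self.coord_head = nn.Linear(d_model, 2)
        self.char_head = nn.Linear(d_model, n_classes)

    def forward(self, image_features):
        # image_features: (batch, num_tokens, d_model) from a backbone/encoder.
        b = image_features.size(0)
        queries = self.point_queries.weight.unsqueeze(0).expand(b, -1, -1)
        decoded = self.decoder(queries, image_features)
        return self.coord_head(decoded).sigmoid(), self.char_head(decoded)

feats = torch.randn(1, 400, 256)           # stand-in encoder output
points, chars = PointQueryDecoder()(feats)
print(points.shape, chars.shape)            # (1, 25, 2), (1, 25, 97)
```

Per the paper title, a single decoder "solos" both detection and recognition; the sketch above only shows the query-to-points mechanism, not the joint heads.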
§ 01 · Benchmarks
Every benchmark DeepSolo (with pre-training) has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | scut-ctw1500 | Computer Vision · Optical Character Recognition | precision | 92.5% | #1 | 2022-11-19 | source ↗ |
| 02 | scut-ctw1500 | Computer Vision · Optical Character Recognition | f-measure | 89.3% | #4 | 2022-11-19 | source ↗ |
| 03 | scut-ctw1500 | Computer Vision · Optical Character Recognition | recall | 86.3% | #4 | 2022-11-19 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; rank #1 marks the current state of the art. Rows are sorted by rank, then by newest result.
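Row 02 is internally consistent with rows 01 and 03: the f-measure is the harmonic mean of precision and recall. A quick check (the `f_measure` helper is illustrative, not part of the benchmark tooling):

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the standard F1 score)."""
    return 2 * precision * recall / (precision + recall)

# SCUT-CTW1500 rows from the table above:
p, r = 92.5, 86.3
print(f"f-measure: {f_measure(p, r):.1f}%")  # -> 89.3%, matching row 02
```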
§ 02 · Strengths by area
Where DeepSolo (with pre-training) performs best.
Computer Vision · Optical Character Recognition: 3 results (see § 01).
§ 03 · Papers
1 paper with results for DeepSolo (with pre-training).
- 2022-11-19 · Computer Vision · 3 results · DeepSolo: Let Transformer Decoder with Explicit Points Solo for Text Spotting
§ 04 · Sources & freshness
Where these numbers come from.
arxiv · 3 results
3 of 3 rows marked verified.