Model card
TESTR
Unknown params
Imported from Papers With Code
§ 01 · Benchmarks
Every benchmark TESTR has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | SCUT-CTW1500 | Computer Vision · Optical Character Recognition | f-measure-full-lexicon | 81.5% | #3 | 2022-04-05 | source ↗ |
| 02 | ICDAR 2015 | Computer Vision · Scene Text Detection | recall | 89.7% | #5 | — | source ↗ |
| 03 | ICDAR 2015 | Computer Vision · Scene Text Detection | f-measure-strong-lexicon | 85.2% | #6 | 2022-04-05 | source ↗ |
| 04 | Total-Text | Computer Vision · Scene Text Detection | f-measure-full-lexicon | 83.9% | #7 | 2022-04-05 | source ↗ |
| 05 | ICDAR 2015 | Computer Vision · Scene Text Detection | f-measure | 90.0% | #7 | — | source ↗ |
| 06 | ICDAR 2015 | Computer Vision · Scene Text Detection | f-measure-weak-lexicon | 79.4% | #8 | 2022-04-05 | source ↗ |
| 07 | ICDAR 2015 | Computer Vision · Scene Text Detection | f-measure-generic-lexicon | 73.6% | #9 | 2022-04-05 | source ↗ |
| 08 | Total-Text | Computer Vision · Scene Text Detection | f-measure-no-lexicon | 73.3% | #9 | 2022-04-05 | source ↗ |
| 09 | SCUT-CTW1500 | Computer Vision · Optical Character Recognition | f-measure-no-lexicon | 56.0% | #9 | 2022-04-05 | source ↗ |
| 10 | ICDAR 2015 | Computer Vision · Scene Text Detection | precision | 90.3% | #15 | — | source ↗ |
| 11 | Inverse-Text | Computer Vision · Optical Character Recognition | f-measure-no-lexicon | 34.2% | #16 | 2022-04-05 | source ↗ |
| 12 | Inverse-Text | Computer Vision · Optical Character Recognition | f-measure-full-lexicon | 41.6% | #16 | 2022-04-05 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
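The ICDAR 2015 precision, recall, and f-measure rows above are mutually consistent: f-measure here is the standard F1 score, the harmonic mean of precision and recall. A minimal check in Python (the 0.903 and 0.897 values are rows 10 and 02 of the table):

```python
# F1 (f-measure) is the harmonic mean of precision and recall.
# Values taken from the ICDAR 2015 rows in the benchmark table.
precision = 0.903  # row 10
recall = 0.897     # row 02

f_measure = 2 * precision * recall / (precision + recall)
print(f"{f_measure:.1%}")  # → 90.0%, matching row 05
```

This is only a sanity check on the reported numbers, not the official ICDAR evaluation protocol, which computes the score from per-image detection matches.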
§ 03 · Papers
1 paper with results for TESTR.
- Text Spotting Transformers · 2022-04-05 · Computer Vision · 9 results
§ 04 · Related models
Other models scored on Codesota.
- fglihai: Unknown params · 6 results · 1 SOTA
- CLIP4STR-L: Unknown params · 1 result · 1 SOTA
- USYD NLP_CS29-2: Unknown params · 6 results
- Corner-based Region Proposals: Unknown params · 3 results
- EAST + VGG16: Unknown params · 3 results
- SSTD: Unknown params · 3 results
- TextBoxes++_MS: Unknown params · 3 results
- WordSup (VGG16-synth-coco): Unknown params · 3 results
§ 05 · Sources & freshness
Where these numbers come from.
- papers-with-code: 9 results
- github-readme: 3 results
9 of 12 rows marked verified.