Model card
TextFuseNet (ResNeXt-101).
Unknown params
Imported from Papers With Code
§ 01 · Benchmarks
All benchmarks for which TextFuseNet (ResNeXt-101) has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | ICDAR 2015 | Computer Vision · Scene Text Detection | f-measure | 92.2% | #1 | 2020-05-17 | source ↗ |
| 02 | ICDAR 2015 | Computer Vision · Scene Text Detection | precision | 94.0% | #1 | 2020-05-17 | source ↗ |
| 03 | ICDAR 2013 | Computer Vision · Scene Text Detection | f-measure | 94.6% | #1 | 2020-05-17 | source ↗ |
| 04 | CTW1500 | Computer Vision · Scene Text Detection | f-measure | 86.6% | #1 | 2020-01-01 | source ↗ |
| 05 | CTW1500 | Computer Vision · Scene Text Detection | recall | 85.4% | #1 | 2020-01-01 | source ↗ |
| 06 | ICDAR 2015 | Computer Vision · Scene Text Detection | recall | 90.6% | #2 | 2020-05-17 | source ↗ |
| 07 | ICDAR 2013 | Computer Vision · Scene Text Detection | precision | 97.3% | #2 | 2020-05-17 | source ↗ |
| 08 | ICDAR 2013 | Computer Vision · Scene Text Detection | recall | 92.1% | #2 | 2020-05-17 | source ↗ |
| 09 | CTW1500 | Computer Vision · Scene Text Detection | precision | 87.8% | #3 | 2020-01-01 | source ↗ |
| 10 | ICDAR 2019 ArT | Computer Vision · Scene Text Detection | h-mean | 78.6% | #4 | 2020-05-17 | source ↗ |
| 11 | SCUT-CTW1500 | Computer Vision · Optical Character Recognition | f-measure | 87.4% | #6 | 2020-05-17 | source ↗ |
| 12 | SCUT-CTW1500 | Computer Vision · Optical Character Recognition | precision | 89.7% | #6 | 2020-05-17 | source ↗ |
| 13 | SCUT-CTW1500 | Computer Vision · Optical Character Recognition | recall | 85.1% | #7 | 2020-05-17 | source ↗ |
| 14 | Total-Text | Computer Vision · Scene Text Detection | f-measure | 87.5% | #10 | 2020-05-17 | source ↗ |
| 15 | Total-Text | Computer Vision · Scene Text Detection | recall | 85.8% | #10 | 2020-05-17 | source ↗ |
| 16 | Total-Text | Computer Vision · Scene Text Detection | precision | 89.2% | #16 | 2020-05-17 | source ↗ |
The Rank column shows this model's position relative to all other models scored on the same benchmark and metric. #1 indicates the then-current state of the art. Rows are sorted by rank, then by most recent result.
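The f-measure rows above are the harmonic mean of the matching precision and recall rows, so the table is internally checkable. A minimal sketch (note that the rounded percentages published here may not reproduce every f-measure to the last digit):

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# CTW1500 row: precision 87.8%, recall 85.4% -> f-measure 86.6%
print(round(100 * f_measure(0.878, 0.854), 1))  # 86.6
```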
§ 02 · Strengths by area
Areas where TextFuseNet (ResNeXt-101) performs strongest.
§ 03 · Papers
2 papers with results for TextFuseNet (ResNeXt-101).
- 2020-05-17 · Computer Vision · 13 results
  TextFuseNet: Scene Text Detection with Richer Fused Features
- 2020-01-01 · Computer Vision · 3 results
  TextFuseNet: Scene Text Detection with Richer Fused Features
§ 04 · Related models
Other models scored on Codesota.
- fglihai · Unknown params · 6 results · 1 SOTA
- CLIP4STR-L · Unknown params · 1 result · 1 SOTA
- USYD NLP_CS29-2 · Unknown params · 6 results
- Corner-based Region Proposals · Unknown params · 3 results
- EAST + VGG16 · Unknown params · 3 results
- SSTD · Unknown params · 3 results
- TextBoxes++_MS · Unknown params · 3 results
- WordSup (VGG16-synth-coco) · Unknown params · 3 results
§ 05 · Sources & freshness
Where these numbers come from.
- papers-with-code: 13 results
- arxiv: 3 results

16 of 16 rows marked verified · first result 2020-01-01, latest 2020-05-17.