Codesota · Models · DeepSolo (with pre-training) · ViTAE-Transformer · 3 results · 1 benchmark
Model card

DeepSolo (with pre-training).

ViTAE-Transformer · open-source · Unknown params · DETR-like Transformer decoder with explicit points

DeepSolo with pre-training on Synth150K+MLT17+IC13+IC15. CVPR 2023.
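The tagline above, "DETR-like Transformer decoder with explicit points", compresses the core idea: decoder queries stand for explicit points sampled along a text instance's center line, and each refined point embedding is read out into geometry and character predictions. The sketch below illustrates only that shape of computation; it is not the DeepSolo implementation, and every name and size in it (PointQueryDecoder, num_points, the output heads, the vocabulary size) is hypothetical.

```python
# Minimal, hypothetical sketch of a DETR-like decoder whose queries are
# explicit points along a text center line. NOT the DeepSolo code; all
# names and sizes here are illustrative.
import torch
import torch.nn as nn

class PointQueryDecoder(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=6, num_points=25):
        super().__init__()
        # One learnable query per explicit point on the center line.
        self.point_queries = nn.Embedding(num_points, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        # Per-point heads: 2-D coordinates and a character class.
        self.coord_head = nn.Linear(d_model, 2)
        self.char_head = nn.Linear(d_model, 97)  # 97 = hypothetical vocab

    def forward(self, image_features):
        # image_features: (batch, tokens, d_model) flattened backbone output.
        b = image_features.size(0)
        queries = self.point_queries.weight.unsqueeze(0).expand(b, -1, -1)
        decoded = self.decoder(queries, image_features)
        return self.coord_head(decoded).sigmoid(), self.char_head(decoded)

# Smoke test with random features standing in for a backbone.
feats = torch.randn(2, 1024, 256)
coords, chars = PointQueryDecoder()(feats)
print(coords.shape, chars.shape)  # (2, 25, 2) and (2, 25, 97)
```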

§ 01 · Benchmarks

Every benchmark DeepSolo (with pre-training) has a recorded score for.

#  | Benchmark    | Area · Task                                     | Metric    | Value | Rank  | Date       | Source
01 | scut-ctw1500 | Computer Vision · Optical Character Recognition | precision | 92.5% | #1/18 | 2022-11-19 | source ↗
02 | scut-ctw1500 | Computer Vision · Optical Character Recognition | f-measure | 89.3% | #4/19 | 2022-11-19 | source ↗
03 | scut-ctw1500 | Computer Vision · Optical Character Recognition | recall    | 86.3% | #4/18 | 2022-11-19 | source ↗
The Rank column shows this model's position among all other models scored on the same benchmark + metric (total number of competitors after the slash). #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
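Note that the three metrics above are not independent: the f-measure is the harmonic mean of precision and recall. A quick check in plain Python, using the values from the table, confirms the rows are mutually consistent:

```python
# f-measure = harmonic mean of precision and recall,
# checked against the scut-ctw1500 rows above.
precision, recall = 0.925, 0.863
f_measure = 2 * precision * recall / (precision + recall)
print(f"{f_measure:.1%}")  # 89.3%, matching row 02
```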
§ 02 · Strengths by area

Where DeepSolo (with pre-training) performs well, broken down by area.

Computer Vision · 1 benchmark · avg rank #3.0
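The avg rank figure is presumably the arithmetic mean of the per-metric ranks listed in § 01 (#1, #4, #4); that definition is an assumption, not something this page documents:

```python
# Assumed definition of "avg rank": mean of this model's rank on each
# benchmark + metric pair in the area (here: scut-ctw1500 precision,
# f-measure, and recall from § 01). Not a documented Codesota formula.
ranks = [1, 4, 4]
print(f"avg rank #{sum(ranks) / len(ranks):.1f}")  # avg rank #3.0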
§ 03 · Papers

1 paper with results for DeepSolo (with pre-training).

  1. 2022-11-19 · Computer Vision · 3 results

    DeepSolo: Let Transformer Decoder with Explicit Points Solo for Text Spotting

§ 04 · Sources & freshness

Where these numbers come from.

arXiv · 3 results · 3 of 3 rows marked verified.