Codesota · Models · CLIP4STR-H (DFN-5B) · 2 results · 2 benchmarks
Model card

CLIP4STR-H (DFN-5B)

Unknown params

Imported from Papers With Code

§ 01 · Benchmarks

Every benchmark CLIP4STR-H (DFN-5B) has a recorded score for.

#  | Benchmark | Area · Task                               | Metric       | Value | Rank  | Date       | Source
01 | svt       | Computer Vision · Scene Text Recognition  | accuracy     | 99.1% | #1/40 | 2023-05-23 | source ↗
02 | wost      | Computer Vision · Scene Text Recognition  | 1-1-accuracy | 90.9% | #1/5  | 2023-05-23 | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric, with the total number of competitors after the slash. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
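The rank convention above can be sketched in a few lines. This is an illustrative example with hypothetical scores, not Codesota's actual ranking code; the helper name and data are assumptions.

```python
def rank_on_benchmark(results: dict[str, float], model: str) -> str:
    """Return a rank string like '#1/40' for `model` among `results`.

    `results` maps model name -> metric value; higher values rank first,
    as with accuracy on scene text recognition benchmarks.
    """
    ordered = sorted(results, key=results.get, reverse=True)
    return f"#{ordered.index(model) + 1}/{len(ordered)}"


# Hypothetical leaderboard for one benchmark + metric:
scores = {"CLIP4STR-H (DFN-5B)": 99.1, "Model-A": 98.7, "Model-B": 97.5}
print(rank_on_benchmark(scores, "CLIP4STR-H (DFN-5B)"))  # → #1/3
```

A model is SOTA on a benchmark exactly when this function returns a rank beginning with `#1`.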
§ 02 · Strengths by area

Where CLIP4STR-H (DFN-5B) actually performs.

Computer Vision: 2 benchmarks · avg rank #1.0
§ 03 · Papers

1 paper with results for CLIP4STR-H (DFN-5B).

  1. 2023-05-23 · Computer Vision · 2 results

    CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model

§ 04 · Related models

Other models scored on Codesota.

fglihai
Unknown params · 6 results · 1 SOTA
CLIP4STR-L
Unknown params · 1 result · 1 SOTA
USYD NLP_CS29-2
Unknown params · 6 results
Corner-based Region Proposals
Unknown params · 3 results
EAST + VGG16
Unknown params · 3 results
SSTD
Unknown params · 3 results
TextBoxes++_MS
Unknown params · 3 results
WordSup (VGG16-synth-coco)
Unknown params · 3 results
§ 05 · Sources & freshness

Where these numbers come from.

papers-with-code: 2 results. 2 of 2 rows marked verified.