Codesota · Models · UniTable Large
Georgia Tech (Peng et al.) · 2 results · 1 benchmark
Model card

UniTable Large.

Georgia Tech (Peng et al.) · open-source · Unknown params · ViT encoder + autoregressive decoder; self-supervised pretraining on unannotated tabular images

Unified framework for table structure, cell content, and bounding boxes via a language modeling objective. Achieves SOTA on PubTabNet, FinTabNet, and SynthTabNet. Published March 2024.
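The "unified via language modeling" idea means the decoder emits one token stream that interleaves structure tags with quantized coordinate tokens, so structure, content, and bbox prediction share a single objective. A minimal sketch of parsing such a stream; the token names and bbox quantization scheme here are hypothetical, not UniTable's actual vocabulary:

```python
# Toy parser for a unified decoder output: one autoregressive token stream
# mixing table-structure tags and quantized bbox coordinate tokens.
# Token names are illustrative assumptions, not UniTable's real vocabulary.

tokens = [
    "<table>", "<tr>",
    "<td>", "bbox_10", "bbox_12", "bbox_55", "bbox_30", "</td>",
    "<td>", "bbox_60", "bbox_12", "bbox_110", "bbox_30", "</td>",
    "</tr>", "</table>",
]

def parse_stream(tokens):
    """Split one token stream into HTML structure plus per-cell bboxes."""
    structure, bboxes, coords = [], [], []
    for tok in tokens:
        if tok.startswith("bbox_"):
            coords.append(int(tok[len("bbox_"):]))  # quantized coordinate bin
        else:
            structure.append(tok)
            if tok == "</td>":           # cell closed: flush its coordinates
                bboxes.append(tuple(coords))  # (x1, y1, x2, y2)
                coords = []
    return "".join(structure), bboxes

html, boxes = parse_stream(tokens)
# html  -> "<table><tr><td></td><td></td></tr></table>"
# boxes -> [(10, 12, 55, 30), (60, 12, 110, 30)]
```

Because all three outputs come from the same stream, a single cross-entropy loss over tokens trains the whole task.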

§ 01 · Benchmarks

Every benchmark UniTable Large has a recorded score for.

#  | Benchmark | Area · Task                         | Metric           | Value | Rank  | Date       | Source
01 | pubtabnet | Computer Vision · Table Recognition | teds-struct      | 97.9% | #2/14 | 2024-03-07 | source ↗
02 | pubtabnet | Computer Vision · Table Recognition | teds-all-samples | 96.5% | #8/16 | 2024-03-07 | source ↗
The Rank column shows this model's position against all other models scored on the same benchmark and metric (total competitors after the slash). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
§ 02 · Strengths by area

Where UniTable Large actually performs.

Computer Vision · 1 benchmark · avg rank #5.0
§ 03 · Papers

1 paper with results for UniTable Large.

  1. 2024-03-07 · Computer Vision · 2 results

     UniTable: Towards a Unified Framework for Table Recognition via Self-Supervised Pretraining

§ 04 · Sources & freshness

Where these numbers come from.

arxiv · 2 results · 2 of 2 rows marked verified.