Model card
TAPAS-large
Google · open-source · 340M params · BERT-large (table-augmented)
TAPAS: Weakly Supervised Table Parsing via Pre-training. ACL 2020.
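For readers who want to try the model on a table question, below is a minimal sketch using the Hugging Face transformers port of TAPAS. The checkpoint name google/tapas-large-finetuned-wtq, the toy table, and the query are illustrative assumptions, not data from this card.

```python
# Minimal table-QA sketch, assuming the Hugging Face `transformers` TAPAS classes.
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

# Assumed WTQ-finetuned TAPAS-large checkpoint name (not stated on this card).
model_name = "google/tapas-large-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS takes a flat table (every cell as a string) plus a natural-language query.
table = pd.DataFrame(
    {"City": ["Paris", "Berlin", "Madrid"],
     "Population": ["2,161,000", "3,664,000", "3,305,000"]}
)
queries = ["Which city has the largest population?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Turn cell-selection and aggregation logits back into table coordinates.
cell_coords, agg_indices = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
answers = [
    ", ".join(table.iat[row, col] for row, col in coords) for coords in cell_coords
]
print(answers, agg_indices)
```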
§ 01 · Benchmarks
Every benchmark TAPAS-large has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | WikiTableQuestions | Natural Language Processing · Table Question Answering | accuracy | 48.7% | #3 | 2020-04-06 | arXiv |
The Rank column shows this model's position among all models scored on the same benchmark and metric; #1 denotes the current SOTA. Rows are sorted by rank, then by most recent result.
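For context on the accuracy metric above: WikiTableQuestions is scored by denotation accuracy, i.e. a prediction counts as correct only when the predicted answer set matches the gold answer set. The sketch below is a simplified illustration; the helper names and normalization are assumptions, and the official WTQ evaluator handles numbers, dates, and unicode more carefully.

```python
# Simplified denotation-accuracy sketch (illustrative only; the official
# WikiTableQuestions evaluator normalizes numbers, dates, and unicode).
def normalize(value: str) -> str:
    return value.strip().lower()

def denotation_match(predicted: list[str], gold: list[str]) -> bool:
    # Correct only if the two answer sets match exactly.
    return {normalize(v) for v in predicted} == {normalize(v) for v in gold}

def denotation_accuracy(predictions: list[list[str]], references: list[list[str]]) -> float:
    correct = sum(denotation_match(p, g) for p, g in zip(predictions, references))
    return correct / len(references)

# Toy usage: the 48.7% above is this fraction computed over the WTQ test set.
print(denotation_accuracy([["Paris"], ["3"]], [["paris"], ["4"]]))  # 0.5
```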
§ 02 · Strengths by area
Where TAPAS-large actually performs: all of its recorded results fall under Natural Language Processing · Table Question Answering.
§ 03 · Papers
1 paper with results for TAPAS-large.
- TAPAS: Weakly Supervised Table Parsing via Pre-training (2020-04-06 · Natural Language Processing · 1 result)
§ 04 · Related models
Other Google models scored on Codesota.
- Gemini 2.5 Pro · 16 results · 3 SOTA
- Gemini 3 Pro · undisclosed params · 13 results · 2 SOTA
- Gemini 1.5 Pro · 12 results · 1 SOTA
- Gemini 3.1 Pro · 3 results · 1 SOTA
- ViT-H/14 · 632M params · 2 results · 1 SOTA
- CoCa (finetuned) · 2.1B params · 1 result · 1 SOTA
- Gemini 2.0 Flash · 1 result · 1 SOTA
- Gemini 3.1 Pro Preview · 1 result · 1 SOTA
§ 05 · Sources & freshness
Where these numbers come from.
arXiv · 1 result
1 of 1 result rows is marked verified.