Model card
CoCa
Google · open source · Unknown params · Image encoder + cross-attention + causal decoder
Contrastive Captioner. Trained on JFT-3B and web alt-text image–text pairs (the ALIGN dataset). CIDEr 143.6 on COCO Captions. 2022. Source: arxiv:2205.01917.
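The architecture line above (image encoder + cross-attention + causal decoder) is trained with two objectives at once: a contrastive image–text loss on the encoder embeddings and a captioning loss on the decoder's next-token predictions. A minimal NumPy sketch of that combined loss, assuming a symmetric InfoNCE contrastive term plus token-level cross-entropy; the temperature and the `w_con`/`w_cap` weights here are illustrative defaults, not the paper's exact values:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    # Symmetric InfoNCE: matching image-text pairs sit on the diagonal
    # of the batch similarity matrix.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature
    n = logits.shape[0]
    idx = np.arange(n)
    loss_i2t = -np.log(softmax(logits, axis=1)[idx, idx]).mean()
    loss_t2i = -np.log(softmax(logits, axis=0)[idx, idx]).mean()
    return (loss_i2t + loss_t2i) / 2

def captioning_loss(dec_logits, targets):
    # Next-token cross-entropy over the caption sequence.
    # dec_logits: (batch, seq_len, vocab); targets: (batch, seq_len).
    probs = softmax(dec_logits, axis=-1)
    n, t = targets.shape
    picked = probs[np.arange(n)[:, None], np.arange(t)[None, :], targets]
    return -np.log(picked).mean()

def coca_loss(img_emb, txt_emb, dec_logits, targets, w_con=1.0, w_cap=2.0):
    # Weighted sum of the two training objectives (weights are illustrative).
    return (w_con * contrastive_loss(img_emb, txt_emb)
            + w_cap * captioning_loss(dec_logits, targets))
```

The decoder itself is split in the paper: its lower half attends only to text (producing the contrastive text embedding) and its upper half cross-attends to image features for captioning, which lets one forward pass serve both losses.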
§ 01 · Benchmarks
Every benchmark CoCa has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | COCO Captions | Multimodal · Image Captioning | CIDEr | 143.60 | #2 | 2022-05-02 | source ↗ |
The Rank column shows this model's position among all models scored on the same benchmark and metric. #1 (shown in red) marks the current SOTA. Rows are sorted by rank, then by newest result.
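CIDEr, the metric in the table above, scores a candidate caption by TF-IDF-weighted n-gram similarity to human reference captions, so rare, informative n-grams count more than common ones. A simplified pure-Python sketch; it omits the count clipping and length penalty of the CIDEr-D variant typically used for reported scores such as 143.6:

```python
from collections import Counter
import math

def ngrams(tokens, n):
    # Multiset of n-grams in a token list.
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def tfidf_vec(counts, df, num_docs):
    # Term frequency weighted by inverse document frequency over the corpus.
    total = sum(counts.values()) or 1
    return {g: (c / total) * math.log(num_docs / max(df.get(g, 1), 1))
            for g, c in counts.items()}

def cosine(u, v):
    dot = sum(u[g] * v.get(g, 0.0) for g in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cider(candidate, references, corpus_refs, max_n=4):
    """Simplified CIDEr: mean TF-IDF-weighted n-gram cosine similarity
    for n = 1..max_n, scaled by 10 as in the original metric."""
    num_docs = len(corpus_refs)
    total = 0.0
    for n in range(1, max_n + 1):
        # Document frequency: in how many images' reference sets an n-gram occurs.
        df = Counter()
        for refs in corpus_refs:
            df.update(set().union(*(set(ngrams(r, n)) for r in refs)))
        cand_vec = tfidf_vec(ngrams(candidate, n), df, num_docs)
        sims = [cosine(cand_vec, tfidf_vec(ngrams(r, n), df, num_docs))
                for r in references]
        total += sum(sims) / len(sims)
    return 10.0 * total / max_n
```

Real evaluations compute the IDF statistics over the full reference corpus (e.g. all COCO test captions), which is what makes generic n-grams like "a" or "the" contribute little to the score.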
§ 03 · Papers
1 paper with results for CoCa.
- 2022-05-02 · Multimodal · 1 result
CoCa: Contrastive Captioners are Image-Text Foundation Models
§ 04 · Related models
Other Google models scored on Codesota.
Gemini 2.5 Pro
16 results · 3 SOTA
Gemini 3 Pro
Undisclosed params · 13 results · 2 SOTA
Gemini 1.5 Pro
12 results · 1 SOTA
Gemini 3.1 Pro
3 results · 1 SOTA
ViT-H/14
632M params · 2 results · 1 SOTA
CoCa (finetuned)
2.1B params · 1 result · 1 SOTA
Gemini 2.0 Flash
1 result · 1 SOTA
Gemini 3.1 Pro Preview
1 result · 1 SOTA
§ 05 · Sources & freshness
Where these numbers come from.
arxiv · 1 result
1 of 1 rows marked verified.