Codesota · General · Vision-Language Models · RefCOCO / RefCOCO+ / RefCOCOg (overall)
Vision-Language Models · benchmark dataset · EN

RefCOCO / RefCOCO+ / RefCOCOg (referring-expression visual grounding datasets on MS COCO).

RefCOCO / RefCOCO+ / RefCOCOg are a family of referring-expression (visual grounding) benchmarks built on MS COCO images. Each dataset pairs natural-language referring expressions with target object instances (bounding boxes), so models are evaluated on localizing the described object in the image. Key characteristics:

  • RefCOCO — 142,209 expressions for 50,000 object instances in 19,994 COCO images; expressions are short and concise; split into train/val/testA/testB.
  • RefCOCO+ — 141,564 expressions for 49,856 objects in 19,992 images; similar to RefCOCO, but location/absolute-position words are banned, which encourages appearance-based descriptions.
  • RefCOCOg — 85,474 longer, more complex expressions for 54,822 objects in 26,711 images; collected with a different protocol, so expressions average much longer than in RefCOCO/RefCOCO+.

These datasets are widely used to evaluate referring-expression comprehension / visual grounding / vision-language localization models. (Figures from the original papers and dataset releases: Yu et al. (ECCV/ArXiv) and Mao et al. (CVPR/ArXiv), plus standard dataset metadata from the TFDS / Hugging Face dataset entries.)
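Results on these benchmarks are conventionally reported as accuracy at IoU ≥ 0.5: a predicted box counts as correct when its intersection-over-union with the ground-truth box is at least 0.5. A minimal sketch of that metric (boxes assumed to be in `(x1, y1, x2, y2)` pixel coordinates; function names are illustrative, not from any dataset release):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def accuracy_at_05(predictions, ground_truths):
    """Fraction of predicted boxes with IoU >= 0.5 against their ground truth."""
    hits = sum(1 for p, g in zip(predictions, ground_truths) if iou(p, g) >= 0.5)
    return hits / len(ground_truths)
```

For example, a prediction that matches its ground truth exactly scores IoU 1.0 and counts as a hit, while a box with no overlap scores 0.0 and counts as a miss.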

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 A public checkpoint or API endpoint
  • 02 A reproduction script with frozen commit + seed
  • 03 Declared evaluation environment (Python, deps)
  • 04 One row per metric declared by this dataset
  • 05 A contact so we can follow up on discrepancies
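The seed and environment items above can be sketched as a short header at the top of a reproduction script. This is only an illustrative skeleton, assuming a Python evaluation script; the constant name and function are placeholders, not part of any submission API:

```python
import random
import platform

SEED = 1234  # frozen seed, fixed alongside the pinned commit (item 02)
random.seed(SEED)  # seed any other frameworks you use (numpy, torch, ...) the same way

def declare_environment():
    """Collect the evaluation environment to report with the result (item 03)."""
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "seed": SEED,
    }
```

Printing this dictionary at the start of the run makes discrepancies between your environment and ours easy to spot when we re-run the script.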