RefCOCO, RefCOCO+, and RefCOCOg are a family of referring-expression (visual grounding) benchmarks built on MS COCO images. Each dataset pairs natural-language referring expressions with target object instances (bounding boxes), so models can be evaluated on localizing the described object in the image. Key characteristics:

- RefCOCO: ~142,209 expressions for ~50,000 object instances in 19,994 COCO images; expressions are short and concise; split into train/val/testA/testB.
- RefCOCO+: ~141,564 expressions for ~49,856 objects in 19,992 images; similar to RefCOCO, but location and absolute-position words are disallowed, which encourages appearance-based descriptions.
- RefCOCOg: ~85,474 longer, more complex expressions for ~54,822 objects in 26,711 images; collected with a different protocol, and expressions are on average much longer than in RefCOCO/RefCOCO+.

These datasets are widely used to evaluate referring expression comprehension, visual grounding, and vision-language localization models. (Information from the original papers and dataset releases: Yu et al., ECCV 2016, and Mao et al., CVPR 2016, along with standard dataset metadata and the TFDS / Hugging Face dataset entries.)
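Referring expression comprehension on these benchmarks is commonly scored with Acc@0.5: a prediction counts as correct if its box overlaps the annotated box with IoU of at least 0.5. Below is a minimal, self-contained sketch of that metric; the (x_min, y_min, x_max, y_max) box format and the function names are illustrative choices, not part of any dataset release.

```python
from typing import Sequence, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def iou(box_a: Box, box_b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_accuracy(preds: Sequence[Box], gts: Sequence[Box], thresh: float = 0.5) -> float:
    """Fraction of expressions whose predicted box reaches `thresh` IoU with the ground truth."""
    assert len(preds) == len(gts)
    hits = sum(iou(p, g) >= thresh for p, g in zip(preds, gts))
    return hits / len(gts) if gts else 0.0

if __name__ == "__main__":
    # Toy example: one correct and one incorrect localization.
    predictions = [(10, 10, 110, 210), (0, 0, 50, 50)]
    ground_truth = [(12, 8, 112, 205), (200, 200, 260, 260)]
    print(f"Acc@0.5 = {grounding_accuracy(predictions, ground_truth):.2f}")  # 0.50
```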
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
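For orientation only, here is a sketch of what a reproduction script could look like. Every name in it (`load_model`, `iter_split`, `predict_box`, the flags) is a placeholder for your own code; this page does not define a required interface.

```python
"""Hypothetical reproduction script: load a checkpoint, run inference on one split,
and print Acc@0.5. The stubs below must be replaced with your own loading code."""
import argparse

def load_model(checkpoint_path: str):
    """Placeholder: replace with your framework's checkpoint loading."""
    raise NotImplementedError("plug in your model loading here")

def iter_split(split: str):
    """Placeholder: yield (image, expression, gt_box) triples for the chosen split."""
    raise NotImplementedError("plug in your data loading here")

def box_iou(a, b):
    # Boxes are (x_min, y_min, x_max, y_max).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def main() -> None:
    parser = argparse.ArgumentParser(description="Score a checkpoint on a RefCOCO-family split.")
    parser.add_argument("--checkpoint", required=True)
    parser.add_argument("--split", default="val", choices=["val", "testA", "testB"])
    args = parser.parse_args()

    model = load_model(args.checkpoint)
    correct = total = 0
    for image, expression, gt_box in iter_split(args.split):
        pred_box = model.predict_box(image, expression)  # placeholder inference call
        correct += box_iou(pred_box, gt_box) >= 0.5
        total += 1
    print(f"Acc@0.5 on {args.split}: {correct / total:.4f}")

if __name__ == "__main__":
    main()
```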