
GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering.

GQA is a new dataset for visual question answering featuring compositional questions over real-world images. It consists of 22M questions about real-world images, where each image is associated with a scene graph of its objects, attributes, and relations. Each question is paired with a functional program: a structured representation of its semantics that specifies the reasoning steps needed to answer it. The dataset is designed to address shortcomings of existing VQA benchmarks by mitigating language priors and conditional biases and by enabling fine-grained diagnosis across question types.
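To make that structure concrete, here is a minimal, hypothetical Python sketch of reading one question record and its functional program. The file name and field names ("question", "answer", "imageId", "semantic", ...) follow the released GQA JSON files, but treat the exact schema as an assumption and verify it against the official download.

```python
import json

# Hypothetical sketch: inspect one GQA question and its functional program.
# The file name and field names below follow the released GQA JSON files,
# but treat the exact schema as an assumption.
with open("val_balanced_questions.json") as f:
    questions = json.load(f)  # dict mapping question id -> question record

qid, record = next(iter(questions.items()))
print(record["question"])   # natural-language question
print(record["answer"])     # short ground-truth answer
print(record["imageId"])    # links the question to an image and its scene graph

# The functional program: an ordered list of reasoning steps (e.g. select,
# relate, query), each step possibly consuming the output of earlier steps.
for step in record["semantic"]:
    print(step["operation"], step["argument"], step["dependencies"])
```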

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
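As an illustration of the shape such a script might take, here is a hedged Python skeleton: it pins a commit, fixes a seed, records the evaluation environment, and emits one entry per declared metric. Every concrete name in it (the commit hash, the metric key, the output layout) is a placeholder assumption, not a Codesota or GQA API.

```python
#!/usr/bin/env python3
"""Hypothetical skeleton of a reproduction script for a submission.

Every concrete name below (the commit hash, the metric key, the output
layout) is a placeholder assumption, not a Codesota or GQA API.
"""
import json
import platform
import random
import subprocess
import sys

COMMIT = "abc1234"  # placeholder: the exact commit your score was produced at
SEED = 0            # fixed seed so reruns are deterministic


def main() -> None:
    # Fail fast if the working tree is not at the declared commit.
    head = subprocess.run(
        ["git", "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if head != COMMIT:
        sys.exit(f"expected commit {COMMIT}, working tree is at {head}")

    random.seed(SEED)

    # Declared evaluation environment (extend with your pinned dependencies).
    env = {"python": platform.python_version(), "platform": platform.platform()}

    # Placeholder: replace with real inference over the GQA evaluation split.
    metrics = {"accuracy": 0.0}  # one entry per metric the dataset declares

    json.dump({"commit": COMMIT, "seed": SEED, "env": env, "metrics": metrics},
              sys.stdout, indent=2)


if __name__ == "__main__":
    main()
```

Emitting the environment alongside the metrics keeps discrepancies diagnosable: if a rerun disagrees with your submitted score, the first things to compare are the commit, the seed, and the dependency versions.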