Visual Question Answering · benchmark dataset · 2017 · EN

Visual Question Answering v2.0.

265K images with 1.1M questions. v2.0 balances the dataset with complementary image pairs, so the same question appears with images that yield different answers, reducing the language biases found in v1.
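
For reference, a minimal Python loading sketch, assuming the official v2 question/annotation JSON layout distributed at visualqa.org; the file names below are the standard val2014 split names, and the paths should point at your local copy.

```python
import json

# Official VQA v2.0 file names for the val2014 split (visualqa.org);
# adjust the paths to wherever you downloaded the dataset.
QUESTIONS = "v2_OpenEnded_mscoco_val2014_questions.json"
ANNOTATIONS = "v2_mscoco_val2014_annotations.json"

with open(QUESTIONS) as f:
    questions = json.load(f)["questions"]      # image_id, question, question_id
with open(ANNOTATIONS) as f:
    annotations = json.load(f)["annotations"]  # question_id, 10 human answers, multiple_choice_answer

# Join each question to its human answers via question_id.
ann_by_qid = {a["question_id"]: a for a in annotations}
for q in questions[:3]:
    a = ann_by_qid[q["question_id"]]
    print(q["image_id"], q["question"], "->", a["multiple_choice_answer"])
```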

Paper · Download dataset · Submit a result
§ 01 · Leaderboard

Best published scores.

7 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: accuracy · higher is better · 7 rows

| #  | Model          | Access | Org                    | Submitted | Paper / code                                              | accuracy |
|----|----------------|--------|------------------------|-----------|-----------------------------------------------------------|----------|
| 01 | Qwen2-VL 72B   | OSS    | Alibaba                | Sep 2024  | Qwen2-VL: Enhancing Vision-Language Model's Perception o… | 87.60    |
| 02 | InternVL2-76B  | OSS    | Shanghai AI Lab        | Apr 2024  | InternVL: Scaling up Vision Foundation Models and Aligni… | 87.20    |
| 03 | Gemini 1.5 Pro | API    | Google                 | Feb 2024  | Gemini 1.5: Unlocking multimodal understanding across mi… | 86.50    |
| 04 | BLIP-2         | OSS    | Salesforce             | Jan 2023  | BLIP-2: Bootstrapping Language-Image Pre-training with F… | 82.19    |
| 05 | LLaVA-1.5      | OSS    | UW-Madison / Microsoft | Oct 2023  | Improved Baselines with Visual Instruction Tuning (LLaVA… | 80.00    |
| 06 | GPT-4o         | API    | OpenAI                 | Oct 2024  | n/a                                                       | 78.50    |
| 07 | GPT-4V         | API    | OpenAI                 | Mar 2023  | GPT-4 Technical Report                                    | 77.20    |
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
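
The accuracy column is, presumably, the standard VQA consensus metric: a predicted answer scores min(#matching human answers / 3, 1), averaged over the leave-one-annotator-out subsets of the 10 collected answers. A minimal sketch of that scorer, omitting the official normalization of punctuation, articles, and number words:

```python
def vqa_accuracy(pred: str, human_answers: list[str]) -> float:
    """VQA consensus accuracy: an answer counts as fully correct once at
    least 3 annotators gave it; the official scorer averages the score
    over all leave-one-annotator-out subsets of the 10 answers."""
    scores = []
    for i in range(len(human_answers)):
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(a == pred for a in others)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Only 2 of 10 annotators answered "2", so the prediction gets partial credit:
print(vqa_accuracy("2", ["2", "2", "3", "3", "3", "3", "3", "3", "3", "3"]))  # 0.6
```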
§ 02 · Progress

4 steps of state of the art.

Each row below marks a model that broke the previous record on accuracy. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.
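
Concretely, the SOTA line is just the running maximum of the leaderboard over time. A small sketch of that computation, using the dates and scores transcribed from Fig 2 (rows the table gives only as month/year are placed on the 1st of that month):

```python
from datetime import date

# (date, model, accuracy) transcribed from the leaderboard above.
rows = [
    (date(2023, 1, 30), "BLIP-2", 82.19),
    (date(2023, 3, 1), "GPT-4V", 77.20),
    (date(2023, 10, 1), "LLaVA-1.5", 80.00),
    (date(2024, 2, 15), "Gemini 1.5 Pro", 86.50),
    (date(2024, 4, 25), "InternVL2-76B", 87.20),
    (date(2024, 9, 18), "Qwen2-VL 72B", 87.60),
    (date(2024, 10, 1), "GPT-4o", 78.50),
]

# A row sets a new SOTA step iff it beats the running maximum to date.
best = float("-inf")
for when, model, score in sorted(rows):
    if score > best:
        best = score
        print(f"{when}  {model}  {score}")  # prints the 4 steps listed below
```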

SOTA line · accuracy
  1. Jan 30, 2023 · BLIP-2 · Salesforce · 82.19
  2. Feb 15, 2024 · Gemini 1.5 Pro · Google · 86.50
  3. Apr 25, 2024 · InternVL2-76B · Shanghai AI Lab · 87.20
  4. Sep 18, 2024 · Qwen2-VL 72B · Alibaba · 87.60
Fig 3 · SOTA-setting models only. 4 entries span Jan 2023 – Sep 2024.
§ 03 · Literature

7 papers tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

§ 04 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name. A sketch of such a script follows the checklist below.

Submit a result · Read submission guide
What a submission needs
  • 01 A public checkpoint or API endpoint
  • 02 A reproduction script with frozen commit + seed
  • 03 Declared evaluation environment (Python, deps)
  • 04 One row per metric declared by this dataset
  • 05 A contact so we can follow up on discrepancies
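
As a rough illustration of those five requirements, here is a hypothetical skeleton of a reproduction script; the model name, checkpoint URL, and evaluate_vqa body are placeholders, not Codesota's actual submission API.

```python
"""Skeleton of a reproduction script for a submission.
Everything named here is illustrative: replace the model name,
checkpoint URL, and evaluate_vqa() body with your own code."""
import json
import platform
import random

import numpy as np  # pin exact versions in requirements.txt, e.g. numpy==1.26.4

SEED = 1234  # frozen seed (requirement 02)
CHECKPOINT = "https://huggingface.co/your-org/your-vqa-model"  # public checkpoint (01)

random.seed(SEED)
np.random.seed(SEED)

def evaluate_vqa(checkpoint: str) -> float:
    # Placeholder: load the checkpoint, run the VQA v2 eval split, and
    # return the official consensus accuracy. Dummy value as written.
    return 0.0

report = {
    "model": "your-vqa-model",
    "checkpoint": CHECKPOINT,
    "seed": SEED,
    "environment": {  # declared evaluation environment (03)
        "python": platform.python_version(),
        "numpy": np.__version__,
    },
    "metrics": {"accuracy": evaluate_vqa(CHECKPOINT)},  # one row per metric (04)
    "contact": "you@example.org",  # so discrepancies can be followed up (05)
}
print(json.dumps(report, indent=2))
```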