A Visual Question Answering dataset that requires models to read and reason about text in natural images. It contains 45,336 questions about 28,408 images drawn from the Open Images dataset. Questions require OCR-based reasoning, e.g. "What does the sign say?". It is a standard benchmark for evaluating text understanding within visual scenes. Metrics: ANLS and exact-match accuracy.
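The two metrics can be sketched as follows. This is a minimal illustration, assuming the standard ANLS definition (per-question score is the best similarity over reference answers, zeroed when the normalized edit distance reaches the threshold, conventionally 0.5) and a simple case-insensitive exact match; official evaluation scripts may normalize answers differently.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def anls(prediction: str, answers: list[str], tau: float = 0.5) -> float:
    # Best similarity over reference answers; scores with normalized
    # edit distance >= tau count as 0. The dataset-level ANLS is the
    # mean of this per-question score.
    best = 0.0
    for ans in answers:
        p, a = prediction.strip().lower(), ans.strip().lower()
        nl = levenshtein(p, a) / max(len(p), len(a), 1)
        if nl < tau:
            best = max(best, 1.0 - nl)
    return best

def exact_match(prediction: str, answers: list[str]) -> float:
    # 1.0 if the prediction matches any reference answer exactly
    # (after trivial whitespace/case normalization), else 0.0.
    norm = prediction.strip().lower()
    return float(any(norm == a.strip().lower() for a in answers))
```

For example, `anls("stp", ["stop"])` gives 0.75 (one edit over four characters), while `exact_match("stp", ["stop"])` gives 0.0.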
9 results indexed across 1 metric. The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | accuracy |
|---|---|---|---|---|---|
| 01 | Qwen2.5-VL 72B (OSS) | Alibaba | Feb 2025 | Qwen2.5-VL Technical Report | 85.50 |
| 02 | Qwen2-VL 72B (OSS) | Alibaba | Sep 2024 | Qwen2-VL: Enhancing Vision-Language Model's Perception o… | 84.90 |
| 03 | InternVL2-76B (OSS) | Shanghai AI Lab | Apr 2024 | InternVL: Scaling up Vision Foundation Models and Aligni… | 84.40 |
| 04 | Llama 3.2 Vision 90B (OSS) | Meta | Jul 2024 | The Llama 3 Herd of Models | 83.40 |
| 05 | Gemini 1.5 Pro (API) | Google | Feb 2024 | Gemini 1.5: Unlocking multimodal understanding across mi… | 82.20 |
| 06 | GPT-4V | OpenAI | Mar 2023 | GPT-4 Technical Report | 78.00 |
| 07 | GPT-4o (API) | OpenAI | Oct 2024 | GPT-4o System Card | 77.40 |
| 08 | LLaVA-1.5 (OSS) | UW-Madison / Microsoft | Oct 2023 | Improved Baselines with Visual Instruction Tuning (LLaVA… | 61.30 |
| 09 | BLIP-2 (OSS) | Salesforce | Jan 2023 | BLIP-2: Bootstrapping Language-Image Pre-training with F… | 42.50 |
Each row below marks a model that broke the previous accuracy record. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here. Higher scores win, so each successive entry improved on the previous best.
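The record-breaking subset can be recovered mechanically from the leaderboard: sort by submission date and keep each entry whose accuracy beats every earlier one. A minimal sketch, using (date, model, accuracy) tuples transcribed from the table above with dates rounded to the first of the month:

```python
from datetime import date

# (submission date, model, accuracy) rows from the leaderboard above
rows = [
    (date(2023, 1, 1), "BLIP-2", 42.50),
    (date(2023, 3, 1), "GPT-4V", 78.00),
    (date(2023, 10, 1), "LLaVA-1.5", 61.30),
    (date(2024, 2, 1), "Gemini 1.5 Pro", 82.20),
    (date(2024, 4, 1), "InternVL2-76B", 84.40),
    (date(2024, 7, 1), "Llama 3.2 Vision 90B", 83.40),
    (date(2024, 9, 1), "Qwen2-VL 72B", 84.90),
    (date(2024, 10, 1), "GPT-4o", 77.40),
    (date(2025, 2, 1), "Qwen2.5-VL 72B", 85.50),
]

def sota_steps(rows):
    # Walk submissions in date order, keeping only entries that
    # exceed the running best accuracy (higher is better).
    best = float("-inf")
    steps = []
    for when, model, acc in sorted(rows):
        if acc > best:
            best = acc
            steps.append((when, model, acc))
    return steps
```

Applied to these rows, the SOTA-setting sequence is BLIP-2 → GPT-4V → Gemini 1.5 Pro → InternVL2-76B → Qwen2-VL 72B → Qwen2.5-VL 72B; submissions like LLaVA-1.5 and GPT-4o stay in the leaderboard but do not set a record.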
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.