
TextVQA

TextVQA is a visual question answering (VQA) dataset in which models must read and reason about text within images to answer questions. It contains 45,336 questions over 28,408 images, each designed so that answering requires understanding the scene text in the image. Evaluation uses the VQA accuracy metric.
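
Each question comes with 10 human answers, and VQA accuracy scores a prediction by how many annotators agree with it, averaged over leave-one-out subsets of those answers. A minimal Python sketch of the metric (the function name is ours, and the official evaluator also normalizes answers for articles, punctuation, and number words, which this sketch omits):

def vqa_accuracy(predicted, human_answers):
    # Standard VQA accuracy over the 10 human answers per question:
    # for each leave-one-out subset of 9 answers, the prediction scores
    # min(matches / 3, 1); the final score is the mean over all subsets.
    pred = predicted.strip().lower()
    answers = [a.strip().lower() for a in human_answers]
    scores = []
    for i in range(len(answers)):
        others = answers[:i] + answers[i + 1:]  # leave annotator i out
        matches = sum(a == pred for a in others)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# 4 of 10 annotators answered "stop": every 9-answer subset keeps at
# least 3 matches, so the prediction earns full credit.
print(vqa_accuracy("stop", ["stop"] * 4 + ["halt"] * 6))  # 1.0

In the example above, agreement from 4 of 10 annotators is already enough for a score of 1.0, since every leave-one-out subset retains at least 3 matching answers.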

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
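
As a purely illustrative example of requirements 02 to 04, the sketch below pins a commit and a seed, reads a predictions file, and emits one row for the dataset's declared metric alongside the evaluation environment. The file layout, field names, and output format are assumptions, not a required interface:

import json
import platform
import random
import sys

COMMIT = "deadbeef"   # frozen commit of your model repo (placeholder)
SEED = 1234           # fixed seed so the run is repeatable

random.seed(SEED)     # seed any stochastic components your pipeline uses

def vqa_accuracy(pred, answers):
    # Same leave-one-out VQA accuracy as sketched above.
    pred = pred.strip().lower()
    answers = [a.strip().lower() for a in answers]
    scores = []
    for i in range(len(answers)):
        others = answers[:i] + answers[i + 1:]
        scores.append(min(sum(a == pred for a in others) / 3.0, 1.0))
    return sum(scores) / len(scores)

def main(predictions_path):
    # predictions.json: [{"prediction": str, "answers": [str, ...]}, ...]
    with open(predictions_path) as f:
        examples = json.load(f)
    acc = sum(vqa_accuracy(e["prediction"], e["answers"]) for e in examples)
    acc /= len(examples)
    # One row for the declared metric, plus the declared environment.
    print(json.dumps({
        "metric": "VQA accuracy",
        "value": round(100 * acc, 2),
        "commit": COMMIT,
        "seed": SEED,
        "python": platform.python_version(),
    }))

if __name__ == "__main__":
    main(sys.argv[1])

Running the actual model inference belongs in the same script (or one it calls) at the pinned commit; the key point is that someone with the checkpoint, the commit, and the seed can regenerate the predictions file and the metric row without guessing.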