TextVQA is a visual question answering (VQA) dataset that requires models to read and reason about text within images in order to answer questions. It contains 45,336 questions over 28,408 images, with each question designed so that answering it requires understanding the scene text in the image. Evaluation uses the standard VQA accuracy metric.
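For reference, here is a minimal sketch of the VQA accuracy metric, assuming the usual 10 human answers per question. The official scorer additionally normalizes answers (punctuation, articles, number words), which is omitted here for brevity.

```python
def vqa_accuracy(prediction: str, human_answers: list[str]) -> float:
    """VQA accuracy for one question: an answer is fully correct when
    at least 3 human annotators gave it. As in the official metric,
    the score is averaged over all leave-one-out subsets of the
    annotator answers. Only lowercase/whitespace normalization is
    applied here; the official scorer does fuller normalization."""
    pred = prediction.strip().lower()
    answers = [a.strip().lower() for a in human_answers]
    scores = []
    for i in range(len(answers)):
        others = answers[:i] + answers[i + 1:]
        matches = sum(a == pred for a in others)
        scores.append(min(matches / 3.0, 1.0))
    return sum(scores) / len(scores)

# Example: 7 of 10 annotators said "stop", so "stop" scores 1.0.
print(vqa_accuracy("stop", ["stop"] * 7 + ["halt"] * 3))
```

A prediction matching fewer than three annotators earns partial credit (for example, two matches yield roughly 0.67), which is why reported scores are averages of per-question values in [0, 1] rather than exact-match rates.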
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
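As a rough illustration of what a reproduction script might look like, the skeleton below wires a model's prediction function to the metric sketched above. The annotation file layout, the `predict` signature, and the checkpoint loader are all hypothetical placeholders, not a required interface.

```python
# Hypothetical skeleton of a reproduction script (eval_textvqa.py).
import json

def evaluate(predict, annotations_path: str) -> float:
    """Run predict(image_path, question) -> str over the annotation
    file and return mean VQA accuracy, reusing the vqa_accuracy
    helper defined above. Assumes each sample record carries an
    image path, the question text, and the list of human answers."""
    with open(annotations_path) as f:
        samples = json.load(f)
    total = sum(
        vqa_accuracy(predict(s["image_path"], s["question"]), s["answers"])
        for s in samples
    )
    return total / len(samples)

# Example wiring (placeholder names):
# model = MyModel.load("checkpoint.pt")
# print(f"VQA accuracy: {evaluate(model.predict, 'textvqa_val.json'):.4f}")
```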