DocVQA is a dataset for Visual Question Answering (VQA) on document images. It consists of 50,000 questions defined on over 12,000 document images, covering various document types with textual, graphical, and structural elements like tables, forms, and figures. The document images are sourced from the UCSF Industry Documents Library and include a mix of printed, typewritten, and handwritten content, such as letters, memos, notes, and reports. The dataset is split into a training set (39,463 questions, 10,194 images), a validation set (5,349 questions, 1,286 images), and a test set (5,188 questions, 1,287 images).
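As a quick sanity check, the split sizes above sum exactly to the stated totals. The sketch below just tabulates the published counts; the dict layout is illustrative, not an official loading API for the dataset.

```python
# Split sizes as published in the dataset description above.
SPLITS = {
    "train":      {"questions": 39_463, "images": 10_194},
    "validation": {"questions": 5_349,  "images": 1_286},
    "test":       {"questions": 5_188,  "images": 1_287},
}

total_questions = sum(s["questions"] for s in SPLITS.values())
total_images = sum(s["images"] for s in SPLITS.values())

print(total_questions)  # 50000 -- matches the stated 50,000 questions
print(total_images)     # 12767 -- consistent with "over 12,000 document images"
```

Note that the roughly 80/10/10 question split mirrors the image split, i.e. images (and their questions) are assigned to splits together rather than questions being shuffled independently.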
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.