
DocVQA

DocVQA is a dataset for Visual Question Answering (VQA) on document images. It consists of 50,000 questions defined on over 12,000 document images, covering various document types with textual, graphical, and structural elements like tables, forms, and figures. The document images are sourced from the UCSF Industry Documents Library and include a mix of printed, typewritten, and handwritten content, such as letters, memos, notes, and reports. The dataset is split into a training set (39,463 questions, 10,194 images), a validation set (5,349 questions, 1,286 images), and a test set (5,188 questions, 1,287 images).
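The annotations are distributed as one JSON file per split alongside the page images. Below is a minimal loading sketch, assuming the commonly seen release layout in which each split file holds a "data" list of records with "image", "question", and "answers" fields; the file-name pattern and field names are assumptions, so verify them against the files you download.

```python
import json
from pathlib import Path

# Assumed local layout: one annotation JSON per split next to the document images.
# The file-name pattern and field names ("data", "image", "question", "answers")
# are assumptions about the common DocVQA release format; check them before use.
DOCVQA_ROOT = Path("docvqa")  # hypothetical download directory


def load_split(split: str):
    """Yield (image_path, question, answers) triples for one split."""
    with open(DOCVQA_ROOT / f"{split}_v1.0.json", encoding="utf-8") as f:
        annotations = json.load(f)
    for record in annotations["data"]:
        # Test-set answers are withheld, so fall back to an empty list.
        yield (
            DOCVQA_ROOT / record["image"],
            record["question"],
            record.get("answers", []),
        )


if __name__ == "__main__":
    train = list(load_split("train"))
    print(f"train questions: {len(train)}")  # expected 39,463 per the split sizes above
```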

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, pinned dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
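To make the requirements above concrete, here is a minimal sketch of what a reproduction script's scaffolding might look like. Everything model-specific is a placeholder; the point is pinning the seed and commit up front and emitting the declared environment alongside one result row per metric.

```python
"""Hypothetical reproduction-script skeleton for a submission.

The commit hash, checkpoint, and inference call are placeholders; only the
seed pinning, environment declaration, and per-metric result rows matter here.
"""
import json
import platform
import random
import sys

SEED = 1234                      # frozen seed, declared up front
COMMIT = "<frozen-commit-hash>"  # placeholder: pin the exact commit you evaluated at


def declare_environment() -> dict:
    """Capture the interpreter, platform, and evaluation pins for the submission."""
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "commit": COMMIT,
        "seed": SEED,
        # List every dependency that affects the score, e.g. via
        # importlib.metadata.version("torch").
    }


def main() -> None:
    random.seed(SEED)
    # ... load the public checkpoint and run inference on the evaluation split ...
    report = {
        "environment": declare_environment(),
        "results": [
            # One row per metric declared by the dataset; values are placeholders.
            {"metric": "<declared metric>", "split": "test", "value": None},
        ],
        "contact": "<email for follow-up on discrepancies>",
    }
    print(json.dumps(report, indent=2))


if __name__ == "__main__":
    main()
```

Keeping the environment pins and the result rows in a single machine-readable report makes it straightforward to check a rerun against the declared setup when a discrepancy comes up.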