DoTA (Document image machine Translation dataset of ArXiv articles in markdown format) is a large-scale dataset of document-image → translation pairs introduced for document image machine translation (DIMT). It is built from arXiv articles: whole pages with complex layouts (tables, figures, multi-section text) serve as source images, and the target is markdown-formatted translated text, making it a testbed for long-context, complex-layout DIMT. The NAACL 2024 paper reports a filtered set of about 126K image–translation pairs; the public repository additionally provides an unfiltered collection of roughly 139K samples. The evaluated en→zh subset pairs English source documents with Chinese targets, and the dataset metadata lists further language variants. The data is distributed on Hugging Face under an MIT license; the dataset is gated, so access requires agreeing to the access conditions.
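Because the dataset is gated, loading it follows the standard Hugging Face access flow. A minimal sketch using the `datasets` library; the repo id and field names are hypothetical, so check the dataset card for the real ones:

```python
from datasets import load_dataset

# Hypothetical repo id -- see the Hugging Face dataset card for the real one.
# The dataset is gated: accept the access conditions on the Hub first, then
# authenticate locally (e.g. `huggingface-cli login`).
ds = load_dataset("your-org/DoTA", split="train")

example = ds[0]
print(example.keys())  # expect a page-image field plus markdown-formatted target text
```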
1 result indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | COMET |
|---|---|---|---|---|---|
| 01 | HunyuanOCR (1B) | — | Nov 2025 | HunyuanOCR Technical Report · code | 83.48 |
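For reference, COMET scores like the one above are typically computed with Unbabel's `comet` package. A minimal sketch, assuming the common `wmt22-comet-da` checkpoint (the leaderboard does not state which checkpoint it uses):

```python
# pip install unbabel-comet
from comet import download_model, load_from_checkpoint

model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))

# One record per segment: source, system output (mt), and reference.
data = [{
    "src": "Source English text extracted from the document image.",
    "mt": "系统输出的中文译文。",
    "ref": "参考中文译文。",
}]

out = model.predict(data, batch_size=8, gpus=0)  # set gpus=1 to score on GPU
print(out.system_score)  # corpus-level COMET, as reported in the table
```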
Each row below marks a model that set a new record on COMET (higher is better). Intermediate submissions remain in the leaderboard above; only record-setting entries are re-listed here, each improving on the previous best.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
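For orientation, a reproduction script might take the shape below. This is only an illustrative skeleton: the repo id, field names, and the `translate()` stub are placeholders, not a fixed submission interface.

```python
import json

from datasets import load_dataset

def translate(image):
    """Placeholder: swap in your checkpoint's actual inference call."""
    return ""  # a real submission returns markdown-formatted target text

def main():
    # "your-org/DoTA", "image", and "target" are hypothetical names; match
    # them to the actual dataset card before submitting.
    ds = load_dataset("your-org/DoTA", split="test")
    with open("predictions.jsonl", "w", encoding="utf-8") as f:
        for ex in ds:
            rec = {"mt": translate(ex["image"]), "ref": ex["target"]}
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    main()
```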