Crossmodal-3600 (XM3600) is an evaluation benchmark for massively multilingual image captioning. It contains 3,600 geographically diverse images, each annotated with human-generated reference captions in 36 languages. The images were selected from regions where the target languages are spoken, and the captions were produced directly in each language with a consistent style across languages, avoiding direct-translation artifacts. The dataset is intended for model selection and automatic evaluation of multilingual image-captioning systems, and it has also served as a golden reference in related image-text retrieval evaluations. The accompanying paper reports experiments showing strong correlation between automatic metrics computed with XM3600 references and human evaluations. Resources and metadata are available on the project page and on a Hugging Face mirror.
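To make the intended evaluation workflow concrete, here is a minimal sketch that loads per-language references from the Hugging Face mirror and scores model captions against them. The dataset identifier ("your-org/xm3600-mirror"), split name, and field names ("image/key", "captions") are placeholders, not taken from this page; check the mirror's dataset card for the actual schema. The XM3600 paper reports CIDEr; chrF via sacrebleu is used here only to keep the sketch dependency-light.

```python
# Minimal sketch of scoring model captions against XM3600 references.
# Assumptions (hypothetical, not from this page): the dataset id, split,
# and the column layout ("image/key", "captions" keyed by language code).
from datasets import load_dataset
import sacrebleu

LANG = "fr"  # one of the 36 XM3600 language codes

# Hypothetical Hugging Face identifier for the mirror.
refs = load_dataset("your-org/xm3600-mirror", split="test")

# Map each image key to its list of human reference captions for LANG.
refs_by_image = {ex["image/key"]: ex["captions"][LANG] for ex in refs}

def evaluate(predictions: dict[str, str]) -> float:
    """predictions: image key -> model-generated caption in LANG.

    Returns the mean sentence-level chrF against the human references.
    """
    scores = [
        sacrebleu.sentence_chrf(predictions[key], refs_by_image[key]).score
        for key in predictions
    ]
    return sum(scores) / len(scores)
```

A reproduction script would typically wrap a call like evaluate(model_captions) for each of the 36 languages and report per-language scores alongside the macro average.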
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.