MLDR (Multilingual Long-Document Retrieval) is a benchmark for evaluating embedding and retrieval models on long texts. The dataset samples long articles from Wikipedia, Wudao, and mC4 across 13 typologically diverse languages, randomly selects paragraphs from those articles, and uses GPT-3.5 to generate a question for each selected paragraph; each generated question paired with its source article forms a retrieval example. The full multilingual release contains on the order of 200,000 long documents. Papers and implementations that cite MLDR sometimes evaluate only an English-only subset (the “English subset”), reporting both in-domain (fine-tuned) and out-of-domain (no fine-tuning) retrieval with metrics such as nDCG@10. Source: Hugging Face dataset page (Shitao/MLDR) and related project docs (e.g., BGE evaluation docs, third-party benchmarks).
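The sketch below shows one way to load the English subset with the Hugging Face `datasets` library and score a ranked list with nDCG@10 under binary relevance. The config names (`en`, `corpus-en`), split names, and field names (`positive_passages`, `docid`) follow the Shitao/MLDR dataset card and the BGE evaluation docs, but treat them as assumptions and verify against the current card; the ranking used here is a placeholder for whatever retriever you are evaluating.

```python
# Minimal sketch: load the MLDR English subset and compute nDCG@10 for one query.
# Config, split, and field names are assumptions based on the Shitao/MLDR card;
# check the card before running (the corpus download is large: ~200k long docs).
import math

from datasets import load_dataset

# Queries with their annotated positive passages (the sampled source articles).
queries = load_dataset("Shitao/MLDR", "en", split="test")
# The long-document corpus for the same language.
corpus = load_dataset("Shitao/MLDR", "corpus-en", split="corpus")


def ndcg_at_10(ranked_docids: list[str], positive_docids: set[str]) -> float:
    """nDCG@10 with binary relevance: DCG over the top 10 results divided by
    the ideal DCG (all positives ranked first)."""
    dcg = sum(
        1.0 / math.log2(rank + 2)
        for rank, docid in enumerate(ranked_docids[:10])
        if docid in positive_docids
    )
    ideal = sum(1.0 / math.log2(rank + 2) for rank in range(min(10, len(positive_docids))))
    return dcg / ideal if ideal > 0 else 0.0


# Example: score a single query against a placeholder ranking.
example = queries[0]
positives = {p["docid"] for p in example["positive_passages"]}
ranked = [doc["docid"] for doc in corpus.select(range(10))]  # replace with your retriever's top-10
print(ndcg_at_10(ranked, positives))
```

A full evaluation would average nDCG@10 over all test queries; reported English-subset numbers typically follow that protocol.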
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.