Codesota · General · Retrieval · MLDR (English subset)
Retrieval · benchmark dataset · EN

MLDR (Multilingual Long-Document Retrieval) — English subset.

MLDR (Multilingual Long-Document Retrieval) is a benchmark for evaluating embedding and retrieval models on long texts. It samples long articles from Wikipedia, Wudao, and mC4 across 13 typologically diverse languages, randomly selects paragraphs from them, and uses GPT-3.5 to generate questions based on those paragraphs; each generated question, paired with the article it was drawn from, forms a retrieval example. The full multilingual release contains on the order of 200,000 long documents. Papers and implementations that cite MLDR sometimes evaluate only the English subset (the “English subset”), reporting in-domain (fine-tuned) and out-of-domain (no fine-tuning) retrieval with metrics such as nDCG@10. Source: Hugging Face dataset page (Shitao/MLDR) and related project docs (e.g., BGE evaluation docs, third-party benchmarks).
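The nDCG@10 metric reported for this benchmark can be sketched in a few lines. Since each MLDR question has exactly one relevant source article, relevance is binary here. This is a generic nDCG implementation for illustration, not the exact evaluation code used by any cited paper or toolkit:

```python
import math

def dcg_at_k(relevances, k=10):
    # Discounted cumulative gain over the top-k ranked results.
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (descending-relevance) ordering.
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Example: the relevant (source) article is ranked 3rd by the retriever,
# so relevance is 1 at rank 3 and 0 elsewhere.
ranked_relevances = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
print(ndcg_at_k(ranked_relevances, k=10))  # → 0.5
```

With a single relevant document, nDCG@10 reduces to 1/log2(rank + 1): ranking the source article first scores 1.0, ranking it third scores 0.5, and missing the top 10 entirely scores 0.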

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 A public checkpoint or API endpoint
  • 02 A reproduction script with frozen commit + seed
  • 03 Declared evaluation environment (Python, deps)
  • 04 One row per metric declared by this dataset
  • 05 A contact so we can follow up on discrepancies
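As an illustration of items 02 and 03, a reproduction script can pin its seed and declare its environment up front. Everything below (the seed value, the commit string, the function name) is hypothetical and only sketches the idea, not a required Codesota format:

```python
import platform
import random

SEED = 1234          # frozen seed declared in the submission (illustrative value)
COMMIT = "abc1234"   # frozen commit of the evaluation code (illustrative value)

def declare_environment():
    # Print the evaluation environment so reviewers can reproduce the run.
    print(f"python: {platform.python_version()}")
    print(f"commit: {COMMIT}")
    print(f"seed:   {SEED}")

# Seed all sources of randomness before any evaluation work begins,
# so repeated runs of the script produce identical numbers.
random.seed(SEED)
declare_environment()
```

Pinning the seed and printing the environment at startup makes a discrepancy investigation (item 05) much faster, since both sides can confirm they ran the same code under the same conditions.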