Retrieval · benchmark dataset · EN

BEIR — Benchmarking-IR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models.

BEIR (Benchmarking-IR) is a heterogeneous, zero-shot information retrieval benchmark that consolidates 18 publicly available datasets spanning diverse retrieval tasks and domains (e.g., fact checking, question answering, biomedical IR, news retrieval, argument retrieval, duplicate question retrieval, citation prediction, tweet retrieval). It provides a common evaluation framework for IR models (lexical, sparse, dense, late-interaction, re-ranking); results are most commonly reported as nDCG@10 averaged across datasets, with MRR and recall also used. The BEIR code and data are available from the project GitHub and the Hugging Face dataset hub.
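Concretely, a single dataset can be pulled and scored end to end in a few lines. Below is a minimal sketch following the quickstart pattern of the beir Python package (pip install beir); the dataset name (scifact), the encoder id (msmarco-distilbert-base-tas-b), and the output directory are illustrative choices, and exact module paths may vary between package versions.

```python
# Minimal sketch: download one BEIR dataset, run dense retrieval, report nDCG@10.
# Dataset and model id below are illustrative, not prescribed by the benchmark.
from beir import util
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES

# Download and unpack one of the 18 datasets in BEIR's standard format.
dataset = "scifact"
url = f"https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/{dataset}.zip"
data_path = util.download_and_unzip(url, "datasets")

# Every BEIR dataset loads into the same (corpus, queries, qrels) triple.
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

# Exact (brute-force) dense retrieval with a SentenceTransformers encoder.
model = DRES(models.SentenceBERT("msmarco-distilbert-base-tas-b"), batch_size=16)
retriever = EvaluateRetrieval(model, score_function="dot")

results = retriever.retrieve(corpus, queries)

# nDCG@k, MAP@k, Recall@k, P@k at the standard cutoffs (k=10 included).
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
print(ndcg["NDCG@10"])
```

The same (corpus, queries, qrels) interface works for lexical and sparse retrievers, which is what makes averaging one metric across all 18 datasets meaningful.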

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
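For item 02, the shape of such a script might look like the following; everything in it (the commit placeholder, the metric row format) is a hypothetical illustration, not a required layout.

```python
# Hypothetical skeleton of a reproduction script for a submission. Nothing
# here is required beyond what the list above names: a frozen commit, a
# fixed seed, a declared environment, and one row per metric.
import platform
import random

PINNED_COMMIT = "<commit hash your results were produced at>"  # frozen commit (item 02)
SEED = 42  # fixed seed so a rerun reproduces the score (item 02)

random.seed(SEED)

def declare_environment() -> None:
    # Item 03: state the evaluation environment so reviewers can match it.
    print(f"python == {platform.python_version()}")
    print(f"commit == {PINNED_COMMIT}")

def main() -> None:
    declare_environment()
    # ... run retrieval + evaluation here (e.g., the beir snippet above) ...
    # Item 04: emit one row per declared metric, e.g.:
    # print("nDCG@10\t0.0")

if __name__ == "__main__":
    main()
```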