BEIR
BEIR is a heterogeneous benchmark for zero-shot evaluation of information-retrieval models, indexed on Codesota. This page tracks published model results, top scores per metric, and the SOTA timeline for BEIR.
4 models are currently listed on this benchmark.
Help build the community leaderboard by submitting your model results.
Higher is better (all scores are average nDCG@10).
| Rank | Model | Notes | Source | Score | Year | Paper |
|---|---|---|---|---|---|---|
| 1 | NV-Embed-v2 | Average nDCG@10 on BEIR (15 datasets); rank #1 retrieval on the MTEB leaderboard. | Community | 62.65 | 2024 | Source |
| 2 | GTE-Qwen2-7B-instruct | Average nDCG@10 on BEIR (MTEB retrieval sub-task); from the HF model card. | Community | 60.25 | 2024 | Source |
| 3 | E5-Mistral-7B-instruct | Average nDCG@10 on BEIR (15 datasets); from paper Table 1. | Community | 56.9 | 2024 | Source |
| 4 | ColBERTv2 | Average nDCG@10 on BEIR (18 datasets); from the original BEIR / ColBERTv2 papers. | Community | 49.4 | 2022 | Source |
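All scores in the table are average nDCG@10 (normalized discounted cumulative gain, truncated at rank 10), averaged over the BEIR datasets each paper evaluated. As a minimal sketch of how the per-query metric works, the function below (the name and the toy relevance lists are illustrative, not from any BEIR source) computes nDCG@k from graded relevance labels in ranked order:

```python
import math

def ndcg_at_k(relevances, k=10):
    """nDCG@k for a single query.

    relevances[i] is the graded relevance label of the document the
    system ranked at position i (0-based). The DCG discounts gains
    logarithmically by rank; dividing by the ideal DCG (the same
    labels sorted best-first) normalizes the score into [0, 1].
    """
    def dcg(rels):
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(rels[:k]))

    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# A ranking that puts the relevant document first is perfect (1.0);
# burying it at rank 3 halves the score (discount log2(4) = 2).
print(ndcg_at_k([1, 0, 0]))  # 1.0
print(ndcg_at_k([0, 0, 1]))  # 0.5
```

A benchmark-level BEIR score then averages this per-query value over all queries of a dataset, and the leaderboard numbers average again over the datasets included.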