Model card
RankLLaMA-7B.
Castorini (Waterloo) · open-source · 7B params · LLaMA-2-7B (pointwise reranker)
Fine-Tuning LLaMA for Multi-Stage Text Retrieval. arXiv:2310.08319.
§ 01 · Benchmarks
Every benchmark RankLLaMA-7B has a recorded score for.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | MS MARCO | Natural Language Processing · Text Ranking | MRR@10 | 41.8% | #1 | 2023-10-12 | arXiv:2310.08319 |
The Rank column shows this model's position among all models scored on the same benchmark and metric. #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
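MRR@10, the metric reported above, is the mean over queries of the reciprocal rank of the first relevant passage within the top 10 (0 if none appears). A minimal sketch; the query IDs, document IDs, and relevance labels below are illustrative, not MS MARCO data:

```python
def mrr_at_10(ranked_lists, relevant):
    """Mean Reciprocal Rank at cutoff 10.

    ranked_lists: {query_id: [doc_id, ...]} in ranked order.
    relevant:     {query_id: set of relevant doc_ids}.
    """
    total = 0.0
    for qid, ranking in ranked_lists.items():
        rr = 0.0
        for pos, doc_id in enumerate(ranking[:10], start=1):
            if doc_id in relevant.get(qid, set()):
                rr = 1.0 / pos  # reciprocal rank of first hit
                break
        total += rr
    return total / len(ranked_lists)

# Toy run: relevant doc at rank 2 for q1, rank 1 for q2.
runs = {"q1": ["d3", "d7", "d1"], "q2": ["d5", "d2"]}
qrels = {"q1": {"d7"}, "q2": {"d5"}}
print(mrr_at_10(runs, qrels))  # → 0.75
```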
§ 02 · Strengths by area
Areas where RankLLaMA-7B has recorded results.
§ 03 · Papers
1 paper with results for RankLLaMA-7B.
- 2023-10-12 · Natural Language Processing · 1 result · Fine-Tuning LLaMA for Multi-Stage Text Retrieval
§ 04 · Related models
Other Castorini (Waterloo) models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
arXiv · 1 result. 1 of 1 rows marked verified.