Codesota · Models · RankLLaMA-7B
Castorini (Waterloo) · 1 result · 1 benchmark
Model card

RankLLaMA-7B.

Castorini (Waterloo) · open-source · 7B params · LLaMA-2-7B (pointwise reranker)

Fine-Tuning LLaMA for Multi-Stage Text Retrieval. arXiv:2310.08319.
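A minimal sketch of what "pointwise reranker" means here: each (query, passage) pair is scored independently, and passages are sorted by score. The `score` function below is a hypothetical stand-in (simple token overlap); RankLLaMA instead feeds the pair through fine-tuned LLaMA-2-7B and reads a scalar relevance score from the model's output.

```python
# Pointwise reranking sketch. The scorer is illustrative, NOT the
# actual RankLLaMA model: RankLLaMA scores each pair with LLaMA-2-7B.

def score(query: str, passage: str) -> float:
    # Placeholder relevance signal: fraction of query tokens in the passage.
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def rerank(query: str, passages: list[str]) -> list[str]:
    # Pointwise: each passage is scored on its own, then sorted (best first).
    return sorted(passages, key=lambda p: score(query, p), reverse=True)

docs = [
    "deep learning for text ranking",
    "cooking pasta at home",
    "text retrieval with neural rankers",
]
print(rerank("neural retrieval of text", docs)[0])
# → "text retrieval with neural rankers"
```

In the multi-stage setup from the paper, a first-stage retriever produces a candidate list and a reranker like this reorders only that list, so the per-pair scoring cost stays bounded.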

§ 01 · Benchmarks

Every benchmark RankLLaMA-7B has a recorded score for.

# | Benchmark | Area · Task | Metric | Value | Rank | Date | Source
01 | MS MARCO | Natural Language Processing · Text Ranking | MRR@10 | 41.8% | #1/4 | 2023-10-12 | source ↗
The Rank column shows this model’s position among all models scored on the same benchmark and metric (the number after the slash is the size of the field). #1 in red marks the current SOTA. Rows are sorted by rank, then by newest result.
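For reference, the MS MARCO dev metric above, MRR@10, averages the reciprocal rank of the first relevant passage within each query's top-10 results (0 if none appears). A minimal sketch:

```python
# MRR@10: per query, take 1/rank of the first relevant passage among the
# top 10 results (0 if none is relevant), then average over all queries.

def mrr_at_10(ranked_relevance: list[list[int]]) -> float:
    """ranked_relevance: one list per query of 0/1 relevance labels,
    ordered by the reranker's score (best first)."""
    total = 0.0
    for labels in ranked_relevance:
        for rank, rel in enumerate(labels[:10], start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

# Toy run: hit at rank 1, hit at rank 2, miss → (1 + 0.5 + 0) / 3
print(mrr_at_10([[1, 0], [0, 1, 0], [0, 0, 0]]))  # → 0.5
```

So the 41.8% figure means the first relevant passage sits, on average, very near the top of RankLLaMA-7B's reranked lists.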
§ 02 · Strengths by area

Where RankLLaMA-7B actually performs.

Natural Language Processing: 1 benchmark · avg rank #1.0
§ 03 · Papers

1 paper with results for RankLLaMA-7B.

  1. 2023-10-12 · Natural Language Processing · 1 result

    Fine-Tuning LLaMA for Multi-Stage Text Retrieval

§ 04 · Related models

Other Castorini (Waterloo) models scored on Codesota.

MonoT5-3B · 3B params · 0 results
§ 05 · Sources & freshness

Where these numbers come from.

arXiv: 1 result · 1 of 1 rows marked verified.