Model card
E5-Mistral-7B-instruct.
Microsoft · open-source · 7B params · Mistral-7B base · LLM-based embedding
Improving Text Embeddings with Large Language Models. ICLR 2025.
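As an instruction-tuned embedding model, E5-Mistral-7B-instruct expects each query to carry a one-line task instruction; documents are embedded without a prefix. A minimal sketch of that prompt template (the helper name is illustrative; the `Instruct:`/`Query:` format follows the public `intfloat/e5-mistral-7b-instruct` model card):

```python
def build_query(task_description: str, query: str) -> str:
    """Prepend a task instruction to a query in the E5-Mistral prompt
    format. Documents (passages) are passed to the model as-is."""
    return f"Instruct: {task_description}\nQuery: {query}"

# Example task description taken from the retrieval setting:
task = "Given a web search query, retrieve relevant passages that answer the query"
text = build_query(task, "how do text embeddings work")
```

The resulting string is what gets tokenized and embedded; the same document collection can be reused across tasks by only changing the instruction.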
§ 01 · Benchmarks
All benchmarks for which E5-Mistral-7B-instruct has a recorded score.
| # | Benchmark | Area · Task | Metric | Value | Rank | Date | Source |
|---|---|---|---|---|---|---|---|
| 01 | STS Benchmark | Natural Language Processing · Semantic Textual Similarity | Spearman ρ | 84.7% | #2 | 2024-01-01 | source ↗ |
| 02 | BEIR | Natural Language Processing · Text Ranking | nDCG@10 | 56.9% | #3 | 2024-01-01 | source ↗ |
| 03 | MTEB Leaderboard | Natural Language Processing · Feature Extraction | avg. score | 66.6% | #4 | 2024-01-01 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric; #1 marks the current state of the art. Rows are sorted by rank, then by newest result.
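The two ranking metrics in the table are standard: Spearman ρ (rank correlation between predicted and gold similarity scores, used for STS) and nDCG@10 (discounted gain over the top 10 ranked results, used for BEIR). A minimal, self-contained sketch of both (no tie-averaging in the rank computation, so it assumes distinct scores):

```python
import math

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no tied values (ties would need rank averaging)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = math.sqrt(sum((a - mx) ** 2 for a in rx) *
                    sum((b - my) ** 2 for b in ry))
    return num / den

def ndcg_at_10(relevances):
    """nDCG@10 over a ranked list of graded relevance labels:
    DCG of the top-10 results divided by the DCG of the ideal ordering."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log2(i + 2)
                   for i, r in enumerate(rels[:10]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered result list gives nDCG@10 of 1.0, and identical rankings give Spearman ρ of 1.0; the table's 84.7% and 56.9% are these quantities averaged over the benchmark's datasets.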
§ 02 · Strengths by area
Where E5-Mistral-7B-instruct performs best, by task area.
§ 03 · Papers
1 paper with results for E5-Mistral-7B-instruct.
- 2024-01-01 · Natural Language Processing · 3 results
Improving Text Embeddings with Large Language Models
§ 04 · Related models
Other Microsoft models scored on Codesota.
§ 05 · Sources & freshness
Where these numbers come from.
arxiv · 3 results
3 of 3 rows marked verified.