Codesota · Models · E5-Mistral-7B-instruct (Microsoft) · 3 results · 3 benchmarks
Model card

E5-Mistral-7B-instruct

Microsoft · open-source · 7B params · Mistral-7B (LLM-based embedding)

Improving Text Embeddings with Large Language Models. ACL 2024.
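Per the paper, E5-Mistral turns a decoder-only LLM into an embedder: an end-of-sequence token is appended and its last-layer hidden state is taken as the text embedding (queries additionally carry a task-instruction prefix). A minimal sketch of that last-token pooling step on mock activations, assuming nothing about the real model weights (`last_token_pool` is a hypothetical helper name; NumPy arrays stand in for the decoder's output tensors):

```python
import numpy as np

def last_token_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Take the hidden state of each sequence's last non-padded token.

    hidden_states: (batch, seq_len, dim) last-layer decoder outputs.
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding.
    """
    # Index of the last real token (the appended EOS) in each sequence.
    last_idx = attention_mask.sum(axis=1) - 1          # (batch,)
    batch_idx = np.arange(hidden_states.shape[0])
    emb = hidden_states[batch_idx, last_idx]           # (batch, dim)
    # L2-normalize so cosine similarity reduces to a dot product.
    return emb / np.linalg.norm(emb, axis=1, keepdims=True)

# Toy check on random activations: 2 sequences, padded length 5, dim 4.
rng = np.random.default_rng(0)
h = rng.normal(size=(2, 5, 4))
mask = np.array([[1, 1, 1, 0, 0],   # real length 3
                 [1, 1, 1, 1, 1]])  # real length 5
vecs = last_token_pool(h, mask)
print(vecs.shape)  # → (2, 4)
```

With a real checkpoint the same pooling would be applied to the model's last hidden state; normalizing here means downstream similarity scores are plain dot products.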

§ 01 · Benchmarks

Every benchmark E5-Mistral-7B-instruct has a recorded score for.

| #  | Benchmark        | Area · Task                                                 | Metric    | Value | Rank | Date       | Source   |
|----|------------------|-------------------------------------------------------------|-----------|-------|------|------------|----------|
| 01 | STS Benchmark    | Natural Language Processing · Semantic Textual Similarity   | spearman  | 84.7% | #2/3 | 2024-01-01 | source ↗ |
| 02 | BEIR             | Natural Language Processing · Text Ranking                  | ndcg@10   | 56.9% | #3/4 | 2024-01-01 | source ↗ |
| 03 | MTEB Leaderboard | Natural Language Processing · Feature Extraction            | avg-score | 66.6% | #4/6 | 2024-01-01 | source ↗ |
The Rank column shows this model’s position among all models scored on the same benchmark and metric (total competitors after the slash); #1 indicates current SOTA. Rows are sorted by rank, then by newest result.
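The table’s metrics are easy to reproduce on toy data. A quick sketch, assuming nothing about Codesota’s own scoring code: Spearman correlation (the STS metric) compares the rank orderings of model scores and human judgements, and nDCG@10 (the BEIR metric) sums relevance discounted by rank position, normalized by the ideal ordering. The linear-gain nDCG variant below is one common choice; some graders use an exponential-gain form instead:

```python
import numpy as np

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks (no ties here)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / (np.linalg.norm(rx) * np.linalg.norm(ry)))

def ndcg_at_k(relevance, k=10):
    """nDCG@k (linear gain) for one ranked list of graded relevance labels."""
    rel = np.asarray(relevance, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[: ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

# Toy STS pair: model similarity scores vs. human judgements, same ordering.
print(round(spearman([0.9, 0.2, 0.6, 0.4], [5.0, 1.0, 4.0, 2.0]), 3))  # → 1.0
# Toy retrieval run: graded relevance of the top-5 ranked documents.
print(round(ndcg_at_k([3, 2, 0, 1, 0], k=10), 3))
```

A perfect rank ordering gives Spearman 1.0 regardless of score scale, which is why STS leaderboards report it rather than raw correlation of scores.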
§ 02 · Strengths by area

Where E5-Mistral-7B-instruct actually performs.

Natural Language Processing · 3 benchmarks · avg rank #3.0
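The average rank in the card above is just the arithmetic mean of the three per-benchmark ranks from §01 (#2, #3, #4):

```python
ranks = [2, 3, 4]  # E5-Mistral's ranks on STS Benchmark, BEIR, and MTEB (§01)
avg_rank = sum(ranks) / len(ranks)
print(avg_rank)  # → 3.0
```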
§ 03 · Papers

1 paper with results for E5-Mistral-7B-instruct.

  1. Improving Text Embeddings with Large Language Models · 2024-01-01 · Natural Language Processing · 3 results

§ 04 · Related models

Other Microsoft models scored on Codesota.

RAD-DINO · 2 results · 1 SOTA
NaturalSpeech 3 · ~500M params · 1 result · 1 SOTA
Swin Transformer V2 Large · 197M params · 1 result · 1 SOTA
WavLM Large (SV) · 316M params · 1 result · 1 SOTA
ResNet-50 · 25M params · 3 results
Florence-2-Large · 2 results
KOSMOS-2.5 · 2 results
ResNet-152 · 60M params · 2 results
§ 05 · Sources & freshness

Where these numbers come from.

arxiv · 3 results
3 of 3 rows marked verified.