Model card

GTE-Qwen2-7B-instruct

Alibaba · open-source · 7B params · Qwen2-7B (LLM-based embedding)

Ranked #1 on both the MTEB English and MTEB Chinese leaderboards as of June 2024.
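As an LLM-based embedding model, it maps each text to a single vector, and similarity between two texts is typically scored as the cosine similarity of their vectors. A minimal sketch of that scoring step in pure Python (the 3-dimensional vectors are made-up stand-ins for real model embeddings; loading the actual checkpoint is not shown here):

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d embeddings standing in for the model's real output vectors
query     = [0.2, 0.8, 0.1]
doc_match = [0.4, 1.6, 0.2]    # same direction as the query -> similarity 1.0
doc_other = [0.9, -0.1, 0.3]   # mostly unrelated direction

print(round(cosine_similarity(query, doc_match), 3))  # -> 1.0
print(cosine_similarity(query, doc_other) < 0.5)      # -> True
```

Real embeddings from this model are much higher-dimensional, but the ranking logic used on STS-style benchmarks is exactly this comparison applied pairwise.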

§ 01 · Benchmarks

Every benchmark for which GTE-Qwen2-7B-instruct has a recorded score.

#  | Benchmark        | Area · Task                                               | Metric    | Value | Rank | Source
01 | STS Benchmark    | Natural Language Processing · Semantic Textual Similarity | spearman  | 88.4% | #1/3 | source ↗
02 | BEIR             | Natural Language Processing · Text Ranking                | ndcg@10   | 60.3% | #2/4 | source ↗
03 | MTEB Leaderboard | Natural Language Processing · Feature Extraction          | avg-score | 72.0% | #2/6 | source ↗
The Rank column shows this model's position among all models scored on the same benchmark and metric (total competitors after the slash). #1 marks the current SOTA. Rows are sorted by rank, then by newest result.
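The two less common metrics in the table can be computed directly. A minimal sketch in pure Python (the score lists are made-up illustrations, not the actual STS or BEIR data; the Spearman helper ignores ties for simplicity):

```python
from math import log2

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.
    No tie handling -- fine for illustration, not for real evaluation."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def ndcg_at_k(relevances, k=10):
    """nDCG@k: discounted cumulative gain over the retrieved order,
    normalised by the gain of the ideal (descending) order."""
    dcg = sum(rel / log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Toy STS-style data: model similarity scores vs. human judgements
model_scores = [0.9, 0.1, 0.7, 0.3]
human_scores = [5.0, 1.0, 4.0, 2.0]
print(round(spearman(model_scores, human_scores), 3))  # ranks agree exactly -> 1.0

# Toy BEIR-style data: graded relevance of the top retrieved documents
print(round(ndcg_at_k([3, 2, 0, 1], k=10), 3))  # -> 0.985
```

The benchmark values above are these quantities averaged over many sentence pairs (STS) or queries (BEIR), reported as percentages.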
§ 02 · Strengths by area

Where GTE-Qwen2-7B-instruct actually performs.

Natural Language Processing · 3 benchmarks · avg rank #1.7
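The average rank follows directly from the Rank column of the benchmark table (a quick check; ranks 1, 2, 2 are this model's positions on STS Benchmark, BEIR, and the MTEB Leaderboard):

```python
# Ranks of GTE-Qwen2-7B-instruct on its three NLP benchmarks
ranks = [1, 2, 2]  # STS Benchmark, BEIR, MTEB Leaderboard
avg_rank = sum(ranks) / len(ranks)
print(f"avg rank #{avg_rank:.1f}")  # -> avg rank #1.7
```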
§ 04 · Related models

Other Alibaba models scored on Codesota.

Qwen2-VL 72B · 4 results
Qwen2.5-72B-Instruct · 72B params · 4 results
Qwen2.5-Coder 32B · 32B params · 4 results
GOT-OCR2.0 · 3 results
Qwen 3 72B · 72B params · 2 results
Qwen2.5-VL 32B · 2 results
Qwen2.5-VL 72B · 72B params · 2 results
Qwen 3 14B · 14B params · 1 result
§ 05 · Sources & freshness

Where these numbers come from.

arXiv · 3 results

3 of 3 rows marked verified.