Semantic Textual Similarity · benchmark dataset · 2017 · EN

STS Benchmark.

Semantic textual similarity with human-annotated sentence pairs

§ 01 · Leaderboard

Best published scores.

3 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric · spearman · higher is better · 3 rows
# · Model · Org · Submitted · Paper / code · spearman
01 · GTE-Qwen2-7B-instruct (OSS) · Alibaba · Jun 2024 · arxiv · 88.40
02 · E5-Mistral-7B-instruct (OSS) · Microsoft · Jan 2024 · Improving Text Embeddings with Large Language Models · 84.70
03 · all-MiniLM-L6-v2 (OSS) · Sentence-Transformers · Jan 2022 · arxiv · 82.80
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
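
Every score in the table is a Spearman rank correlation (×100) between the model's similarity scores and the human annotations. As a concrete illustration, here is a minimal sketch of the standard recipe for an embedding model such as all-MiniLM-L6-v2 (row 03): encode both sentences of each pair, take the cosine similarity, and correlate against the gold scores. The GLUE mirror of STS-B and its validation split are assumptions; the exact harness behind these rows is not specified on this page.

```python
from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

# Assumption: the GLUE copy of STS-B. Its test labels are hidden,
# so this sketch scores the validation split; leaderboard rows may
# use the original test split from another mirror.
data = load_dataset("glue", "stsb", split="validation")

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

emb1 = model.encode(data["sentence1"], convert_to_tensor=True)
emb2 = model.encode(data["sentence2"], convert_to_tensor=True)

# Cosine similarity of each aligned pair (diagonal of the n x n matrix).
pred = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()

# Spearman is rank-based, so the 0-5 gold scale needs no rescaling.
rho, _ = spearmanr(pred, data["label"])
print(f"spearman: {100 * rho:.2f}")
```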
§ 03 · Progress

3 steps of state of the art.

Each row below marks a model that broke the previous record on spearman. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.

SOTA line · spearman
  1. Jan 1, 2022 · all-MiniLM-L6-v2 · Sentence-Transformers · 82.80
  2. Jan 1, 2024 · E5-Mistral-7B-instruct · Microsoft · 84.70
  3. Jun 16, 2024 · GTE-Qwen2-7B-instruct · Alibaba · 88.40
Fig 3 · SOTA-setting models only. 3 entries span Jan 2022 to Jun 2024.
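
The SOTA line is simply a running maximum over the leaderboard in submission order. As a short sketch of that derivation, using hypothetical tuples that mirror Fig 3 (the strictly-greater comparison matches the tie-breaking rule stated above: a tie keeps the earlier entry):

```python
from datetime import date

# Hypothetical rows mirroring the leaderboard: (submitted, model, spearman).
rows = [
    (date(2022, 1, 1), "all-MiniLM-L6-v2", 82.80),
    (date(2024, 1, 1), "E5-Mistral-7B-instruct", 84.70),
    (date(2024, 6, 16), "GTE-Qwen2-7B-instruct", 88.40),
]

best = float("-inf")
sota_line = []
for submitted, model, score in sorted(rows):  # chronological order
    if score > best:  # strictly better: a tie keeps the earlier entry
        best = score
        sota_line.append((submitted, model, score))

for step, (submitted, model, score) in enumerate(sota_line, 1):
    print(f"{step}. {submitted:%b %d, %Y} · {model} · {score:.2f}")
```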
§ 04 · Literature

1 paper tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
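
To make item 02 concrete, here is an illustrative skeleton of a reproduction script; the checkpoint name and seed value are placeholders, and the evaluation body would follow the Spearman recipe sketched under § 01:

```python
#!/usr/bin/env python
# repro.py · illustrative skeleton only; CHECKPOINT and SEED are placeholders.
import random
import sys

import numpy as np
import torch

SEED = 42  # frozen seed, declared up front (item 02)
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)

CHECKPOINT = "your-org/your-model"  # public checkpoint or API endpoint (item 01)

# Declared evaluation environment (item 03).
print(f"python {sys.version.split()[0]} · torch {torch.__version__} · seed {SEED}")

# Evaluation body: load CHECKPOINT, score the STS Benchmark pairs, and
# print one line per metric declared by the dataset (item 04), e.g. the
# Spearman recipe sketched under § 01.
```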