Text Classification · benchmark dataset · 2019 · EN

SuperGLUE.

A more difficult successor to GLUE, with eight challenging language-understanding tasks designed to be hard for current models.

Paper · Download dataset · Submit a result
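For quick local experimentation, the tasks are also distributed through the Hugging Face Hub. A minimal sketch, assuming the Hugging Face datasets library (our illustration; the Download dataset link above remains the canonical source):

    from datasets import load_dataset

    # SuperGLUE ships as one config per task: boolq, cb, copa, multirc,
    # record, rte, wic, wsc (plus diagnostic sets).
    boolq = load_dataset("super_glue", "boolq")  # some datasets versions
                                                 # require trust_remote_code=True

    print(boolq)                      # train / validation / test splits
    print(boolq["train"][0]["question"])
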
§ 01 · Leaderboard

Best published scores.

7 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric · average-score · higher is better · 7 rows
#  · Model · Access · Org · Submitted · Paper / code · average-score
01 · DeBERTa-v3-large · OSS · Microsoft · Nov 2021 · DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Tra… · 91.40
02 · ST-MoE-32B · OSS · Google Brain · Feb 2022 · ST-MoE: Designing Stable and Transferable Sparse Expert … · 91.20
03 · GPT-4 · API · OpenAI · Mar 2023 · GPT-4 Technical Report · 90.30
04 · Gemini Ultra · n/a · Google DeepMind · Dec 2023 · Gemini: A Family of Highly Capable Multimodal Models · 90.00
05 · PaLM 2 (Large) · n/a · Google · May 2023 · PaLM 2 Technical Report · 87.30
06 · Llama 3.1 405B · OSS · Meta · Jul 2024 · The Llama 3 Herd of Models · 86.70
07 · Qwen2 72B · n/a · Alibaba · Jul 2024 · Qwen2 Technical Report · 85.40
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
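
On the primary metric: per the original SuperGLUE paper, average-score is the unweighted mean of the eight task scores, with two-metric tasks (CB, MultiRC, ReCoRD) first collapsed to the mean of their metrics. A short sketch with made-up per-task numbers:

    from statistics import mean

    # Hypothetical per-task results; two-metric tasks are averaged first.
    task_scores = {
        "boolq":   [87.1],          # accuracy
        "cb":      [93.2, 90.5],    # F1, accuracy
        "copa":    [94.0],          # accuracy
        "multirc": [84.4, 56.7],    # F1a, exact match
        "record":  [91.0, 90.3],    # F1, exact match
        "rte":     [89.9],          # accuracy
        "wic":     [74.1],          # accuracy
        "wsc":     [89.7],          # accuracy
    }

    average_score = mean(mean(metrics) for metrics in task_scores.values())
    print(f"average-score: {average_score:.2f}")
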
§ 03 · Progress

1 step
of state of the art.

Each row below marks a model that broke the previous record on average-score. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.

SOTA line · average-score
  1. Nov 18, 2021 · DeBERTa-v3-large · Microsoft · 91.40
Fig 3 · SOTA-setting models only. 1 entry, dated Nov 2021.
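
The SOTA line is derived mechanically: sort submissions by date and keep each entry that strictly beats the running best on a higher-is-better metric. A minimal sketch (dates taken from the table above; days set to 1 where only the month is recorded):

    from datetime import date

    # (model, submission date, average-score)
    submissions = [
        ("DeBERTa-v3-large", date(2021, 11, 18), 91.40),
        ("ST-MoE-32B",       date(2022, 2, 1),   91.20),
        ("Llama 3.1 405B",   date(2024, 7, 1),   86.70),
    ]

    sota_line, best = [], float("-inf")
    for model, day, score in sorted(submissions, key=lambda s: s[1]):
        if score > best:               # strictly beats the running best
            best = score
            sota_line.append((model, day, score))

    print(sota_line)  # only DeBERTa-v3-large: nothing later topped 91.40
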
§ 04 · Literature

7 papers
tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

§ 06 · Contribute

Have a score that beats
this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed (see the sketch after this list)
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
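
A hypothetical skeleton showing the shape we expect; the checkpoint id, evaluation stub, and output fields below are placeholders, not a Codesota-specified API:

    #!/usr/bin/env python3
    import json
    import platform
    import random

    SEED = 42                     # 02: frozen seed (pair it with a frozen git commit)
    CHECKPOINT = "org/model"      # 01: public checkpoint id or API endpoint
    random.seed(SEED)

    def evaluate_average_score(checkpoint: str) -> float:
        # Placeholder: swap in your actual SuperGLUE evaluation harness.
        return 0.0

    result = {
        "model": CHECKPOINT,
        "seed": SEED,
        "environment": {                          # 03: declared environment
            "python": platform.python_version(),  # also pin deps, e.g. via pip freeze
        },
        "rows": [                                 # 04: one row per declared metric
            {"metric": "average-score", "value": evaluate_average_score(CHECKPOINT)},
        ],
        "contact": "you@example.com",             # 05: for follow-up on discrepancies
    }
    print(json.dumps(result, indent=2))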