Codesota · Natural Language Processing · Text classification · GLUE (dev)
Text classification · benchmark dataset · EN

General Language Understanding Evaluation (GLUE).

GLUE (General Language Understanding Evaluation) is a widely used benchmark suite for evaluating natural language understanding (NLU) systems. It aggregates nine single-sentence and sentence-pair tasks drawn from established datasets (CoLA, SST-2, MRPC, STS-B, QQP, MNLI matched/mismatched, QNLI, RTE, and WNLI) and includes a hand-crafted diagnostic set (AX) for fine-grained linguistic analysis. The benchmark defines standard training/validation/test splits and an aggregate score, a macro-average over the per-task metrics, to summarize overall NLU performance; because the test labels are held out for the official leaderboard, many papers report the dev-set aggregate to compare models. GLUE was introduced in “GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding” (Wang et al., 2018) and is available from the GLUE website and major dataset libraries (Hugging Face Datasets, TensorFlow Datasets).
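
As a quick, non-authoritative illustration of how the suite is usually accessed, the sketch below loads one GLUE task and its matching metric through the Hugging Face datasets and evaluate packages (both assumed installed); the all-zeros predictions are placeholders purely to show the call signatures.

```python
# Minimal sketch: load one GLUE task and score predictions with the
# Hugging Face datasets and evaluate libraries (assumed installed).
from datasets import load_dataset
import evaluate

# Each GLUE task is a separate configuration, e.g. "cola", "sst2", "mrpc", "mnli".
sst2 = load_dataset("glue", "sst2")
print(sst2)                    # train / validation / test splits
print(sst2["validation"][0])   # {'sentence': ..., 'label': ..., 'idx': ...}

# Matching metric bundle: accuracy for SST-2; other tasks add F1,
# Matthews correlation, or Pearson/Spearman correlation.
metric = evaluate.load("glue", "sst2")

# Placeholder predictions (all zeros), only to demonstrate the call signature.
references = sst2["validation"]["label"]
predictions = [0] * len(references)
print(metric.compute(predictions=predictions, references=references))
```

The validation split here is the "dev" set referred to above; the test labels are hidden and scored only through the official GLUE leaderboard.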

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (a sketch follows this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
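
To make items 01 through 03 concrete, below is a hypothetical skeleton of such a reproduction script. The checkpoint name, task, and batch size are placeholders (our assumptions, not Codesota requirements), and it assumes the torch, transformers, datasets, and evaluate packages are pinned in the declared environment.

```python
# repro.py: hypothetical reproduction-script skeleton. The checkpoint name,
# task, and batch size below are placeholders, not Codesota requirements.
import random

import numpy as np
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

SEED = 42                                     # 02: declare and fix the seed
CHECKPOINT = "your-org/your-glue-checkpoint"  # 01: public checkpoint (placeholder)
TASK = "mrpc"                                 # sentence-pair task used as an example
BATCH_SIZE = 32

random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT).eval()

dev = load_dataset("glue", TASK)["validation"]
metric = evaluate.load("glue", TASK)

predictions = []
for start in range(0, len(dev), BATCH_SIZE):
    batch = dev[start:start + BATCH_SIZE]     # slicing yields a dict of lists
    inputs = tokenizer(batch["sentence1"], batch["sentence2"],
                       truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predictions.extend(logits.argmax(dim=-1).tolist())

# 04: one row per metric declared by the dataset (MRPC reports accuracy and F1).
print(metric.compute(predictions=predictions, references=dev["label"]))
```

The declared environment (item 03) would then pin the exact versions of these packages, for example in a requirements.txt committed alongside the frozen commit hash.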