GLUE (General Language Understanding Evaluation) is a widely used benchmark suite for evaluating natural language understanding (NLU) systems. It aggregates nine sentence- and sentence-pair tasks drawn from established datasets (CoLA, SST-2, MRPC, STS-B, QQP, MNLI matched/mismatched, QNLI, RTE, and WNLI) and additionally ships a hand-crafted diagnostic set (AX) for fine-grained linguistic analysis. The benchmark defines standard training/validation/test splits and an aggregate score, the macro-average of the per-task metrics; because the test labels are held out behind the official leaderboard, many papers report the dev-set aggregate to compare models. GLUE was introduced in "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding" (Wang et al., 2018) and is hosted at the GLUE website (gluebenchmark.com) and in major dataset libraries (Hugging Face, TensorFlow Datasets).
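As a quick orientation, here is a minimal sketch of loading one GLUE task and its official metric through the Hugging Face `datasets` and `evaluate` libraries mentioned above. The choice of MRPC and the majority-class baseline are illustrative only; any of the nine task configs (`cola`, `sst2`, `mrpc`, `stsb`, `qqp`, `mnli`, `qnli`, `rte`, `wnli`) loads the same way.

```python
# Minimal sketch: load a GLUE task and score a trivial baseline on its dev set.
from datasets import load_dataset
import evaluate

# DatasetDict with train / validation / test splits for the chosen task.
dataset = load_dataset("glue", "mrpc")

# The GLUE metric is task-specific; MRPC reports accuracy and F1.
metric = evaluate.load("glue", "mrpc")

print(dataset["train"][0])  # {'sentence1': ..., 'sentence2': ..., 'label': ..., 'idx': ...}

# Test labels are hidden (set to -1), so local evaluation uses the validation split.
references = dataset["validation"]["label"]
predictions = [1] * len(references)  # majority-class baseline: always predict "paraphrase"
print(metric.compute(predictions=predictions, references=references))
```

The official aggregate is then the unweighted mean of these per-task scores (with multi-metric tasks like MRPC averaged first), which is what a leaderboard submission summarizes as a single GLUE score.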
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.