A more difficult successor to GLUE, with eight challenging tasks designed to be hard for current models.
Seven results indexed across one metric. The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | average-score |
|---|---|---|---|---|---|
| 01 | DeBERTa-v3-large (OSS) | Microsoft | Nov 2021 | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Tra… | 91.40 |
| 02 | ST-MoE-32B (OSS) | Google Brain | Feb 2022 | ST-MoE: Designing Stable and Transferable Sparse Expert … | 91.20 |
| 03 | GPT-4 (API) | OpenAI | Mar 2023 | GPT-4 Technical Report | 90.30 |
| 04 | Gemini Ultra | Google DeepMind | Dec 2023 | Gemini: A Family of Highly Capable Multimodal Models | 90.00 |
| 05 | PaLM 2 (Large) | Google | May 2023 | PaLM 2 Technical Report | 87.30 |
| 06 | Llama 3.1 405B (OSS) | Meta | Jul 2024 | The Llama 3 Herd of Models | 86.70 |
| 07 | Qwen2 72B | Alibaba | Jul 2024 | Qwen2 Technical Report | 85.40 |
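The ranking rule above (higher average-score first, ties broken by earlier submission date) can be sketched as follows. The model names and scores here are illustrative placeholders, not rows from the leaderboard:

```python
from datetime import date

# Hypothetical submissions: (model, submitted, average-score).
rows = [
    ("model-x", date(2022, 2, 1), 91.2),
    ("model-y", date(2021, 11, 1), 91.2),  # same score as model-x, but earlier
    ("model-z", date(2023, 3, 1), 90.3),
]

# Sort by score descending; on a tie, the earlier submission ranks higher.
ranked = sorted(rows, key=lambda r: (-r[2], r[1]))
print([model for model, _, _ in ranked])  # → ['model-y', 'model-x', 'model-z']
```

Because the date is the secondary sort key, two entries with identical scores keep a stable, reproducible order.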
Each row below marks a model that broke the previous record on average-score. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here. Higher scores win, and each record-setting entry improved on the previous best.
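Extracting the record-setting entries amounts to a single pass over the submissions in date order, keeping only those that beat the running best. A minimal sketch, using made-up submissions rather than the real leaderboard data:

```python
from datetime import date

# Hypothetical submissions: (model, submitted, average-score).
subs = [
    ("model-a", date(2021, 11, 1), 88.0),
    ("model-b", date(2022, 2, 1), 89.5),
    ("model-c", date(2023, 3, 1), 89.1),  # not a record: below 89.5
    ("model-d", date(2023, 12, 1), 91.2),
]

def sota_steps(submissions):
    """Walk submissions in date order and keep only record-setting entries."""
    best = float("-inf")
    steps = []
    for model, submitted, score in sorted(submissions, key=lambda s: s[1]):
        if score > best:  # strictly better than the previous record
            best = score
            steps.append((model, submitted, score))
    return steps

print([model for model, _, _ in sota_steps(subs)])  # → ['model-a', 'model-b', 'model-d']
```

Entries that merely match the current record do not count as new steps; only a strict improvement is annotated on the progress chart.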
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.