MATH is a benchmark dataset of challenging competition-level mathematics problems introduced by Hendrycks et al. (NeurIPS 2021 Datasets and Benchmarks; arXiv:2103.03874). It contains 12,500 problems drawn from math competitions, each annotated with a full step-by-step solution (LaTeX plus natural language) and a final answer. Problems are organized by subject (e.g., algebra, counting & probability, geometry, number theory, precalculus) and by difficulty level. The original release splits them into 7,500 training and 5,000 test problems, though popular public conversions redistribute them as a ~12,000-example training set plus a 500-problem test set (often called MATH-500). MATH is intended for evaluating and training models on mathematical problem solving and derivation generation, and is widely used to benchmark LLM math reasoning.
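For orientation, here is a minimal sketch of loading one of the public conversions with the Hugging Face `datasets` library. The dataset ID and field names are assumptions based on common conversions, so substitute whichever mirror you actually use.

```python
# Minimal sketch, assuming a Hugging Face conversion of MATH.
# The dataset ID and field names are assumptions -- adjust to your mirror.
from datasets import load_dataset

math_ds = load_dataset("hendrycks/competition_math")  # assumed dataset ID

example = math_ds["train"][0]
print(example["problem"])   # problem statement (LaTeX + natural language)
print(example["type"])      # subject, e.g. "Algebra"
print(example["level"])     # difficulty, e.g. "Level 3"
print(example["solution"])  # step-by-step solution; final answer in \boxed{...}
```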
One result is indexed across one metric (Accuracy). The top row holds the current SOTA; ties are broken by earlier submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|
| 01 | Qwen2.5-Plus | — | Dec 2024 | Qwen2.5 Technical Report · code | 84.70 |
Each row below marks a model that broke the then-standing record on Accuracy (higher is better). Intermediate submissions are kept in the leaderboard above; only the SOTA-setting entries are re-listed here.
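Concretely, the re-listed entries are just a running maximum over submissions in date order. Below is a minimal sketch of that filter; the `Submission` record and its fields are hypothetical, not this site's actual backend.

```python
# Minimal sketch of the record-breaking filter: keep an entry only if it
# strictly beats the best Accuracy seen so far, scanning in submission-date
# order so that earlier entries win ties. `Submission` is hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class Submission:
    model: str
    accuracy: float  # percent
    submitted: date

def sota_steps(submissions: list[Submission]) -> list[Submission]:
    records, best = [], float("-inf")
    for sub in sorted(submissions, key=lambda s: s.submitted):
        if sub.accuracy > best:  # strict: a tie does not unseat the record
            best = sub.accuracy
            records.append(sub)
    return records
```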
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it sets a new record, annotate the step on the progress chart with your name.
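For reference, MATH solutions conventionally wrap the final answer in `\boxed{...}`. The sketch below shows one way a reproduction script's outputs might be scored under that convention: extract the boxed answer and exact-match it against the reference. This is an illustration, not this leaderboard's actual harness, which may normalize LaTeX more aggressively before comparing.

```python
# Minimal sketch of Accuracy scoring under the \boxed{...} convention.
# Illustrative only: real graders typically normalize LaTeX (fractions,
# whitespace, units) before comparing answers.

def extract_boxed(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} in `text`,
    handling nested braces, or None if no boxed answer exists."""
    start = text.rfind(r"\boxed{")
    if start == -1:
        return None
    i, depth, out = start + len(r"\boxed{"), 1, []
    while i < len(text):
        ch = text[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return "".join(out)
        out.append(ch)
        i += 1
    return None  # unbalanced braces

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Percent of predictions whose boxed answer matches the reference's."""
    hits = 0
    for pred, ref in zip(predictions, references):
        p, r = extract_boxed(pred), extract_boxed(ref)
        if p is not None and p == r:
            hits += 1
    return 100.0 * hits / len(references)
```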