Codesota · Benchmark · MATH

MATH

MATH is a benchmark of challenging competition-level mathematics problems introduced by Hendrycks et al. (NeurIPS 2021 Datasets & Benchmarks track; arXiv:2103.03874). The dataset contains 12,500 problems drawn from competitions such as the AMC and AIME, each annotated with a full step-by-step solution (written in LaTeX and natural language) and a final answer. Problems are organized by subject (e.g., algebra, counting & probability, geometry, number theory, precalculus) and by difficulty level from 1 to 5. The original release splits the data into 7,500 training and 5,000 test problems; public conversions also circulate with a ~12,000-example training set and a 500-example test set. MATH is intended both for evaluating and for training models on mathematical problem solving with full derivations, and it has become a standard benchmark for LLM mathematical reasoning.
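For readers who want to inspect the data directly, below is a minimal sketch of loading MATH with the Hugging Face datasets library. The dataset id "hendrycks/competition_math" and the field names ("problem", "level", "type", "solution") follow one common public conversion and are assumptions here; other mirrors may use different ids or schemas.

# Minimal sketch: loading MATH via the Hugging Face `datasets` library.
# Assumption: the "hendrycks/competition_math" conversion with fields
# "problem", "level", "type", "solution"; mirrors may differ.
from datasets import load_dataset

math_ds = load_dataset("hendrycks/competition_math")

example = math_ds["train"][0]
print(example["type"])      # subject, e.g. "Algebra"
print(example["level"])     # difficulty, e.g. "Level 3"
print(example["problem"])   # problem statement (LaTeX + natural language)
print(example["solution"])  # worked solution; final answer in \boxed{...}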

§ 01 · SOTA history

Year over year.

Not enough data to show trend.
§ 02 · Leaderboard

Results by metric.

Only 1 model on this benchmark
Help build the community leaderboard — submit your model results.

Accuracy

Accuracy, the fraction of problems for which a model's final answer exactly matches the reference answer, is the reported evaluation metric for MATH. Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.

Higher is better
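As a concrete illustration of the metric, the sketch below extracts the final answer from each solution's \boxed{...} span and counts exact string matches. This is a simplified version of the usual protocol: published evaluations typically add LaTeX answer normalization (so that, e.g., "0.5" and "\frac{1}{2}" compare equal), which this sketch omits.

def extract_boxed(solution: str) -> str | None:
    # Return the contents of the last \boxed{...} span, tracking brace
    # depth so nested braces (e.g. \boxed{\frac{1}{2}}) stay intact.
    start = solution.rfind("\\boxed{")
    if start == -1:
        return None
    i, depth, out = start + len("\\boxed{"), 1, []
    while i < len(solution):
        ch = solution[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                break
        out.append(ch)
        i += 1
    return "".join(out).strip()

def accuracy(predictions: list[str], references: list[str]) -> float:
    # Exact-match accuracy over extracted final answers.
    hits = sum(extract_boxed(p) == extract_boxed(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

For example, accuracy([r"so the answer is \boxed{\frac{1}{2}}"], [r"\boxed{\frac{1}{2}}"]) returns 1.0.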

Trust tiers for Accuracy: verified · paper · vendor · community · unverified
Rank | Model | Trust | Score | Year | Source
01 | Qwen2.5-Plus (dataset: MATH; task: 5) | paper | 84.7 | N/A | Source ↗
§ 04 · Submit a result

Add to the leaderboard.
