Codesota · Reasoning · Mathematical Reasoning · MATH
Mathematical Reasoning · benchmark dataset · 2021 · EN

Mathematics Aptitude Test of Heuristics.

12,500 competition mathematics problems (5,000 in the test split) drawn from the AMC, AIME, and other competitions, covering algebra, geometry, number theory, and more. Substantially harder than GSM8K. Modern evaluations typically use the MATH-500 representative subset.
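MATH is typically scored by exact match on the final answer after normalization. A minimal sketch of that scoring loop (the `normalize_answer` heuristics and the sample records are illustrative, not Codesota's official scorer, which handles many more LaTeX equivalences):

```python
import re

def normalize_answer(ans: str) -> str:
    """Crude normalization: unwrap \\boxed{...}, drop whitespace,
    and trim trailing zeros from simple decimals."""
    m = re.search(r"\\boxed\{(.*)\}", ans)
    if m:
        ans = m.group(1)
    ans = ans.strip().replace(" ", "")
    # Normalize simple numeric answers like "0.50" -> "0.5"
    if re.fullmatch(r"-?\d+\.\d*", ans):
        ans = ans.rstrip("0").rstrip(".")
    return ans

def accuracy(predictions, references) -> float:
    """Fraction of predictions whose normalized answer matches the reference."""
    correct = sum(
        normalize_answer(p) == normalize_answer(r)
        for p, r in zip(predictions, references)
    )
    return correct / len(references)

preds = ["\\boxed{42}", "0.50", "x+1"]
refs = ["42", "0.5", "x+2"]
print(accuracy(preds, refs))  # 2 of 3 match
```

Real harnesses differ mainly in how aggressive this normalization is (fraction simplification, symbolic equivalence checking), which is one reason reported scores for the same model can vary by a point or two.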

Paper · Download dataset · Submit a result
§ 01 · Leaderboard

Best published scores.

34 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: accuracy · higher is better · 34 rows
| # | Model | Type | Org | Submitted | Paper / code | Accuracy |
|----|-------|------|-----|-----------|--------------|----------|
| 01 | o4-mini (high) | API | OpenAI | Mar 2026 | openai-simple-evals | 98.20 |
| 02 | o3 (high) | API | OpenAI | Mar 2026 | openai-simple-evals | 98.10 |
| 03 | o3-mini | API | OpenAI | Mar 2026 | openai-simple-evals | 97.90 |
| 04 | o3 | API | OpenAI | Mar 2026 | openai-simple-evals | 97.80 |
| 05 | o4-mini | API | OpenAI | Mar 2026 | openai-simple-evals | 97.50 |
| 06 | DeepSeek R1 | OSS | DeepSeek | Mar 2026 | deepseek-paper | 97.30 |
| 07 | Gemini 2.5 Pro | API | Google | Mar 2026 | artificialanalysis | 97.30 |
| 08 | o1 | API | OpenAI | Mar 2026 | openai-simple-evals | 96.40 |
| 09 | Kimi k1.5 | API | Moonshot AI | Mar 2026 | kimi-k15-paper | 96.20 |
| 10 | Claude 3.7 Sonnet | API | Anthropic | Mar 2026 | anthropic-blog | 96.20 |
| 11 | DeepSeek-R1-Zero | OSS | DeepSeek | Mar 2026 | deepseek-paper | 95.90 |
| 12 | DeepSeek-R1-Distill-Llama-70B | OSS | DeepSeek | Mar 2026 | deepseek-paper | 94.50 |
| 13 | DeepSeek-R1-Distill-Qwen-32B | OSS | DeepSeek | Mar 2026 | deepseek-paper | 94.30 |
| 14 | DeepSeek-v3-0324 | OSS | DeepSeek | Mar 2026 | llm-stats | 94.00 |
| 15 | Claude Opus 4.5 | API | Anthropic | Mar 2026 | anthropic-model-card | 90.70 |
| 16 | QwQ-32B | OSS | Alibaba/Qwen | Mar 2026 | llm-stats | 90.60 |
| 17 | DeepSeek-V3 | OSS | DeepSeek | Mar 2026 | deepseek-paper | 90.20 |
| 18 | o1-mini | API | OpenAI | Mar 2026 | openai-simple-evals | 90.00 |
| 19 | Llama-4-Maverick | OSS | Meta | Mar 2026 | meta-blog | 89.40 |
| 20 | Claude Opus 4 | API | Anthropic | Mar 2026 | anthropic-model-card | 89.20 |
| 21 | Claude Sonnet 4 | API | Anthropic | Mar 2026 | anthropic-model-card | 88.90 |
| 22 | GPT-4.5 Preview | API | OpenAI | Mar 2026 | openai-simple-evals | 87.10 |
| 23 | o1-preview | API | OpenAI | Mar 2026 | openai-simple-evals | 85.50 |
| 24 | Qwen2.5-72B-Instruct | OSS | Alibaba | Mar 2026 | qwen25-tech-report | 83.10 |
| 25 | GPT-4.1 | API | OpenAI | Mar 2026 | openai-simple-evals | 82.10 |
| 26 | GPT-4o | API | OpenAI | Mar 2026 | openai-simple-evals | 76.60 |
| 27 | Grok 2 | API | xAI | Mar 2026 | openai-simple-evals | 76.10 |
| 28 | Llama 3.1 405B | OSS | Meta | Mar 2026 | openai-simple-evals | 73.80 |
| 29 | GPT-4 Turbo | API | OpenAI | Mar 2026 | openai-simple-evals | 73.40 |
| 30 | Claude 3.5 Sonnet | API | Anthropic | Mar 2026 | openai-simple-evals | 71.10 |
| 31 | GPT-4o mini | API | OpenAI | Mar 2026 | openai-simple-evals | 70.20 |
| 32 | Llama 3.1 70B | OSS | Meta | Mar 2026 | openai-simple-evals | 68.00 |
| 33 | Gemini 1.5 Pro | API | Google | Mar 2026 | google-blog | 67.70 |
| 34 | Claude 3 Opus | API | Anthropic | Mar 2026 | openai-simple-evals | 60.10 |
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
§ 03 · Progress

1 step
of state of the art.

Each row below marks a model that broke the previous record on accuracy. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.
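The SOTA line can be derived mechanically from the leaderboard: walk submissions in date order and keep only the rows that raise the running best. A minimal sketch (the records and exact dates below are an illustrative subset, not the full table):

```python
from datetime import date

# (submission date, model, accuracy) -- illustrative subset of the leaderboard
results = [
    (date(2026, 3, 10), "o1", 96.40),
    (date(2026, 3, 15), "o3 (high)", 98.10),
    (date(2026, 3, 18), "Claude 3.7 Sonnet", 96.20),
    (date(2026, 3, 22), "o4-mini (high)", 98.20),
]

def sota_line(rows):
    """Entries that strictly beat the previous best, in submission order."""
    best, line = float("-inf"), []
    for when, model, score in sorted(rows):
        if score > best:
            best = score
            line.append((when, model, score))
    return line

for when, model, score in sota_line(results):
    print(f"{when}  {model}  {score:.2f}")
```

Claude 3.7 Sonnet drops out of the line because it did not beat the then-current record, which is exactly how intermediate submissions stay in the leaderboard but not in the progress chart.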

SOTA line · accuracy
  1. Mar 22, 2026 · o4-mini (high) · OpenAI · 98.20
Fig 3 · SOTA-setting models only. 1 entry, spanning Mar 2026 to Mar 2026.
§ 06 · Contribute

Have a score that beats
this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
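The requirements above amount to making the run deterministic and self-describing. A reproduction script might pin its seed and emit its environment block along these lines (the `SEED`, `COMMIT` value, and output shape are placeholders, not Codesota's actual submission interface):

```python
import json
import platform
import random
import sys

SEED = 1234          # frozen seed, declared in the submission
COMMIT = "deadbeef"  # placeholder: frozen commit of the eval harness

def declared_environment() -> dict:
    """Environment block to attach to the submission (requirement 03)."""
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "seed": SEED,
        "harness_commit": COMMIT,
    }

def main() -> None:
    random.seed(SEED)  # make any sampling in the eval deterministic
    json.dump(declared_environment(), sys.stdout, indent=2)

if __name__ == "__main__":
    main()
```

Freezing both the seed and the harness commit is what lets a discrepancy between your reported score and ours be debugged rather than disputed.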