Multilingual Grade School Math (MGSM) is a multilingual benchmark of grade-school math word problems introduced in the paper “Language Models are Multilingual Chain-of-Thought Reasoners” (arXiv:2210.03057). It contains the same 250 problems drawn from GSM8K, each manually translated into 10 typologically diverse languages (Spanish, French, German, Russian, Chinese, Japanese, Thai, Swahili, Bengali, Telugu) in addition to English, for 11 languages in total. MGSM is used to evaluate the multilingual reasoning and chain-of-thought capabilities of language models; it provides inputs, targets, and manually translated few-shot exemplars. License: CC BY-SA 4.0. Size: 250 problems per language (1K < n < 10K examples overall). Note: some papers report MGSM results in pre-training comparisons.
1 result indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy |
|---|---|---|---|---|---|
| 01 | Qwen2.5-72B-Instruct | — | Dec 2024 | Qwen2.5 Technical Report · code | 88.16 |
Each row below marks a model that broke the previous record on Accuracy. Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
Higher scores are better; each successive entry improved on the previous best.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.