Language Modeling · benchmark dataset · EN

HiddenMath

HiddenMath is reported to be a hidden/internal benchmark of competition-style mathematics problems used to evaluate large language models. Publicly available evidence is limited: an LLM benchmark listing (LLMDB) describes HiddenMath as "Google’s internal holdout set of competition math problems" and reports scores on a 0–100 accuracy scale. No public dataset release, Hugging Face dataset page, or dedicated paper was found; the dataset appears to be a private, held-out test set used in model evaluation (reported in the Gemma 3 Technical Report, Table 6, as "HiddenMath", metric = accuracy). Source: LLMDB entry for HiddenMath (https://llmdb.com/benchmarks/hiddenmath).
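Because HiddenMath itself is not public, its exact grading procedure is unknown. The sketch below only illustrates how a 0–100 accuracy score is commonly computed for competition-math holdout sets, assuming each problem has a single canonical final answer and predictions are graded by exact match after light normalization; the names (`Problem`, `normalize`, `accuracy`) are hypothetical and not taken from any HiddenMath release.

```python
from dataclasses import dataclass


@dataclass
class Problem:
    """One competition-math item: statement plus a single reference final answer."""
    statement: str
    answer: str  # canonical final answer, e.g. "42" or "3/4"


def normalize(ans: str) -> str:
    """Crude normalization: strip whitespace and a surrounding LaTeX \\boxed{...}."""
    ans = ans.strip()
    if ans.startswith("\\boxed{") and ans.endswith("}"):
        ans = ans[len("\\boxed{"):-1]
    return ans.strip()


def accuracy(problems: list[Problem], predictions: list[str]) -> float:
    """Exact-match accuracy on a 0-100 scale, matching the scale reported by LLMDB."""
    correct = sum(
        normalize(pred) == normalize(prob.answer)
        for prob, pred in zip(problems, predictions, strict=True)
    )
    return 100.0 * correct / len(problems)
```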

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed (a minimal sketch follows this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
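The following is a minimal sketch of what a reproduction script could emit, assuming a Python evaluation harness. The checkpoint URL, the `evaluate` helper, and the contact address are placeholders for whatever the submitter actually uses; the point is the frozen seed, the declared environment, and one output row per declared metric.

```python
import json
import platform
import random
import sys

SEED = 1234                                    # frozen seed, declared in the submission
CHECKPOINT = "https://example.org/my-model"    # public checkpoint or API endpoint (placeholder)


def evaluate(checkpoint: str, seed: int) -> dict[str, float]:
    """Placeholder: load the checkpoint, run the benchmark, return metric -> score."""
    random.seed(seed)
    # ... generate answers, grade them against the benchmark ...
    return {"accuracy": 0.0}  # one entry per metric declared by the dataset


if __name__ == "__main__":
    scores = evaluate(CHECKPOINT, SEED)
    result = {
        "checkpoint": CHECKPOINT,
        "seed": SEED,
        "environment": {                       # declared evaluation environment
            "python": sys.version.split()[0],
            "platform": platform.platform(),
        },
        "rows": [{"metric": m, "score": s} for m, s in scores.items()],
        "contact": "you@example.org",          # so discrepancies can be followed up
    }
    print(json.dumps(result, indent=2))
```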