HiddenMath is reported to be a hidden/internal benchmark of competition-style mathematics problems used to evaluate large language models. Publicly available evidence is limited: an LLM benchmark listing (LLMDB) describes HiddenMath as "Google's internal holdout set of competition math problems" and reports scores on a 0–100 accuracy scale. No public dataset release, Hugging Face dataset page, or dedicated paper was found; the dataset appears to be a private, held-out test set used in model evaluation, reported in the Gemma 3 Technical Report (Table 6) as "HiddenMath" with accuracy as the metric. Source: LLMDB entry for HiddenMath (https://llmdb.com/benchmarks/hiddenmath).
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run the script, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.
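For orientation, here is a minimal sketch of what a reproduction script might look like. Because HiddenMath itself is private, everything dataset-specific is an assumption: the JSONL file, its `problem`/`answer` schema, and the exact-match scoring rule are illustrative stand-ins, and the script uses the standard Hugging Face `transformers` generation API rather than any known HiddenMath harness.

```python
"""Hypothetical reproduction script for a HiddenMath-style submission.

HiddenMath is not publicly released, so the problem file, its JSONL schema
("problem" and "answer" fields), and the answer-matching rule below are
assumptions for illustration, not the benchmark's actual format.
"""
import argparse
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Score a checkpoint on a local math problem set."
    )
    parser.add_argument("--checkpoint", required=True,
                        help="Path or hub ID of the model checkpoint.")
    parser.add_argument("--problems", required=True,
                        help="JSONL file with 'problem' and 'answer' fields "
                             "(assumed schema).")
    args = parser.parse_args()

    tokenizer = AutoTokenizer.from_pretrained(args.checkpoint)
    model = AutoModelForCausalLM.from_pretrained(
        args.checkpoint, torch_dtype="auto", device_map="auto"
    )
    model.eval()

    correct = total = 0
    with open(args.problems) as fh:
        for line in fh:
            record = json.loads(line)
            prompt = f"Problem: {record['problem']}\nFinal answer:"
            inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
            with torch.no_grad():
                output = model.generate(
                    **inputs, max_new_tokens=256, do_sample=False
                )
            # Decode only the newly generated tokens.
            completion = tokenizer.decode(
                output[0][inputs["input_ids"].shape[1]:],
                skip_special_tokens=True,
            )
            # Naive exact match on the first line of the completion; a real
            # harness would normalize LaTeX and numeric answer formats.
            predicted = completion.strip().splitlines()[0] if completion.strip() else ""
            correct += int(predicted == str(record["answer"]).strip())
            total += 1

    # Report on the 0-100 accuracy scale the benchmark listing uses.
    print(f"accuracy: {100.0 * correct / max(total, 1):.1f} / 100 on {total} problems")


if __name__ == "__main__":
    main()
```

A run would look like `python score.py --checkpoint my-model --problems problems.jsonl`, with both file names hypothetical; the greedy decoding (`do_sample=False`) is chosen so the reported score is deterministic and reproducible.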