SuperGPQA is a large multiple-choice benchmark for evaluating LLM knowledge and reasoning across 285 graduate-level disciplines. The public dataset (HF: m-a-p/SuperGPQA) contains ~26.5K question instances (train split) and was constructed to include at least 50 questions per discipline. Each example includes fields such as question, options, answer (and answer_letter), discipline/field/subfield labels, difficulty, and an is_calculation flag. The benchmark is released under an open-data license (ODC-BY) and is intended for evaluating LLM factual knowledge and problem solving in highly specialized academic and professional subject areas.
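To make the schema concrete, here is a minimal sketch of one SuperGPQA-style record and an exact-match scorer. The record's values are invented for illustration; only the field names follow the schema listed above, and the scoring rule (compare a predicted option letter against answer_letter) is an assumption about how one might evaluate, not the benchmark's official harness.

```python
# A hypothetical record using the field names from the SuperGPQA schema.
# All values below are invented for illustration.
example = {
    "question": "Which gas makes up most of Earth's atmosphere?",
    "options": ["Oxygen", "Nitrogen", "Argon", "Carbon dioxide"],
    "answer": "Nitrogen",
    "answer_letter": "B",
    "discipline": "Science",
    "field": "Atmospheric Science",
    "subfield": "Atmospheric Chemistry",
    "difficulty": "easy",
    "is_calculation": False,
}

def score(predictions, examples):
    """Fraction of examples whose predicted letter matches answer_letter."""
    correct = sum(
        1 for pred, ex in zip(predictions, examples)
        if pred.strip().upper() == ex["answer_letter"]
    )
    return correct / len(examples)

print(score(["B"], [example]))
```

Accuracy over the full split is then just this score computed across all ~26.5K records, optionally grouped by the discipline/field/subfield labels.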
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.