Codesota · Benchmark · GPQA

GPQA.

GPQA (Graduate-Level Google-Proof Q&A) is a science question-answering benchmark; scores are reported as Avg@8 in the paper.

§ 01 · SOTA history

Year over year.

Not enough data to show trend.
§ 02 · Leaderboard

Results by metric.

Only one model is currently listed on this benchmark.
Help build the community leaderboard by submitting your model results.

Accuracy

Accuracy is the reported evaluation metric for GPQA. Codesota tracks published model scores on this metric so readers can compare state-of-the-art results across sources and model families.

Higher is better
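The Avg@8 metric mentioned above is not defined on this page; a common reading is accuracy averaged over 8 independent samples per question. A minimal sketch under that assumption (the function name and data layout are illustrative, not from the source):

```python
# Sketch of Avg@8 accuracy, assuming it means per-run accuracy
# averaged over k = 8 independent samples per question.
from statistics import mean

def avg_at_k(per_sample_correct: list[list[bool]], k: int = 8) -> float:
    """per_sample_correct[q][s] is True iff sample s of question q was correct."""
    assert all(len(samples) == k for samples in per_sample_correct)
    # Accuracy of each of the k runs across all questions, then averaged.
    run_accuracies = [
        mean(samples[s] for samples in per_sample_correct) for s in range(k)
    ]
    return mean(run_accuracies)

# Two questions: one always answered correctly, one correct in 4 of 8 samples.
print(avg_at_k([[True] * 8, [True] * 4 + [False] * 4]))  # → 0.75
```

Note that under this definition the result equals the plain mean of all correctness flags; the per-run breakdown is kept only to mirror how "average over 8 runs" is usually described.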

Trust tiers for Accuracy: verified · paper · vendor · community · unverified
| Rank | Model | Trust | Score | Year | Source |
|------|-------|-------|-------|------|--------|
| 01 | Qwen2.5-Plus (dataset: GPQA; task: 5) | paper | 49.7 | N/A | Source ↗ |
§ 04 · Submit a result

Add to the leaderboard.

← Back to Language Modeling