Pick a benchmark, paste the model name + score, link the paper or code repo, and submit. A human reviews every entry — usually within 48 hours. Accepted scores land on the public leaderboard with your username attached.
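In practice a submission boils down to a handful of fields. The sketch below is illustrative only; the field names are hypothetical and do not reflect the site's actual form or API, just the information the entry needs to carry.

```python
# Rough sketch of what one submission carries. Field names and values are
# placeholders, not the site's real schema; the actual entry point is the
# web form described above.
submission = {
    "benchmark": "example-benchmark",    # which leaderboard the score belongs to
    "model": "example-model-7b",         # model name as it should appear on the row
    "score": 71.3,                       # the reported metric value
    "source_url": "https://example.com/paper-or-repo",  # paper or code repo backing the number
    "submitted_by": "your-username",     # handle attached to the accepted row
}
```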
Looking for something else? Use /submit for paper or page suggestions, or open the GitHub repo for editorial pull requests.
You get an email confirming receipt, and the submission shows up as pending on your dashboard.
A human reviewer (currently k.wikiel@) cross-checks the linked paper or code, confirms the metric direction (whether higher or lower is better), and checks that the number reproduces under the published methodology.
Accepted entries land on the public leaderboard with your handle and submission link. Rejected entries get a one-line note explaining why so you can iterate.
Your username appears next to the row, the chart annotation cites the submission, and the source URL is preserved permanently.