Coding Agents · benchmark dataset

SciCode: A Research Coding Benchmark Curated by Scientists.

SciCode is a scientist-curated benchmark for evaluating language models on real-world scientific programming problems. It collects research-level Python coding problems (reported as 65 main problems spanning Chemistry, Materials Science, Biology, Math, and Physics), and each main problem is decomposed into sub-problems that must all be solved correctly to produce the main result. The benchmark provides gold-standard reference implementations, datasets for verifying calculations, example problems, and a leaderboard; evaluation can optionally include detailed "background" context that gives models the relevant theory and mathematical setup for each problem. SciCode was introduced in the paper "SciCode: A Research Coding Benchmark Curated by Scientists" (arXiv:2407.13168) and is maintained on GitHub and the project website; an official Hugging Face dataset mirror is also available.
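
Since each main problem decomposes into sub-problems, the quickest way to inspect that structure is to pull the Hugging Face mirror. The sketch below is illustrative only: the dataset id SciCode1/SciCode, the split name, and the field names problem_id and sub_steps are assumptions based on common conventions, not a confirmed schema; the GitHub repository documents the official loader.

  from datasets import load_dataset

  # Assumed dataset id and split name; verify against the official repo.
  ds = load_dataset("SciCode1/SciCode", split="test")

  for problem in ds:
      # "problem_id" and "sub_steps" are assumed field names, used here to
      # illustrate the main-problem / sub-problem decomposition.
      sub_steps = problem.get("sub_steps") or []
      print(f"{problem.get('problem_id')}: {len(sub_steps)} sub-problems")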

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (sketch below)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
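
To make items 02 and 03 concrete, a reproduction script might look like the sketch below. It is hypothetical scaffolding rather than an official harness: the run_scicode_eval entry point is a placeholder, and the commit hash and seed are examples. The point is that the script pins the harness commit, fixes the seed, and writes the evaluation environment to disk alongside the score.

  #!/usr/bin/env python3
  # Hypothetical reproduction script illustrating the checklist above:
  # frozen commit (02), fixed seed (02), declared environment (03).
  import json
  import platform
  import random
  import subprocess
  import sys

  HARNESS_COMMIT = "0123abc"  # placeholder: pin the exact harness commit
  SEED = 42                   # fixed seed so the run is repeatable

  def main() -> None:
      random.seed(SEED)
      # Capture the evaluation environment (requirement 03) next to the results.
      env = {
          "python": sys.version,
          "platform": platform.platform(),
          "packages": subprocess.run(
              [sys.executable, "-m", "pip", "freeze"],
              capture_output=True, text=True, check=True,
          ).stdout.splitlines(),
          "harness_commit": HARNESS_COMMIT,
          "seed": SEED,
      }
      with open("environment.json", "w") as f:
          json.dump(env, f, indent=2)
      # run_scicode_eval(seed=SEED) would be invoked here -- a placeholder,
      # not a real API; substitute the benchmark's actual entry point.

  if __name__ == "__main__":
      main()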