Coding Agents · benchmark dataset · EN

CRUXEval-O (CRUXEval output-prediction subset).

CRUXEval-O (also written CRUX-O) is the output-prediction task of CRUXEval, a benchmark for code reasoning, understanding, and execution. CRUXEval contains 800 Python functions of 3–13 lines each, generated and filtered as described in the paper; every function is paired with an input and its correct output. In CRUXEval-O the model is given a function and its input and must predict the output, measuring its ability to reason about program execution. The companion task, CRUXEval-I, goes the other way and asks for an input that produces a given output. The benchmark targets execution-level reasoning that standard code-generation benchmarks such as HumanEval and MBPP do not test directly. The dataset and evaluation code are released under the MIT license.
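To make the task concrete, the sketch below shows the shape of an output-prediction item and an exact-match check. The function `f`, the input, and the harness are illustrative assumptions, not an actual benchmark item or the official evaluation code.

```python
# Illustrative CRUXEval-O-style item (not an actual benchmark entry):
# the model is shown the function and the input, and must predict the output.
def f(text):
    # A toy function in the benchmark's 3-13 line range.
    words = text.split()
    return "_".join(w[::-1] for w in words)

item_input = "ab cde"        # given to the model
predicted_output = "ba_edc"  # the model's prediction (assumed here for the demo)

# Exact-match check: the prediction counts as correct when executing the
# function on the given input reproduces the predicted value.
assert f(item_input) == predicted_output
```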

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed (see the sketch after this list)
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
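
As a rough sketch of items 02 and 03, a reproduction script might pin the evaluation code to a commit, declare the environment, and fix the seed. Everything below (the repository URL, commit hash, seed, model identifier, and run.py entry point) is a placeholder assumption, not a prescribed template.

```python
# Hypothetical reproduction script skeleton (all values are placeholders).
import random
import subprocess
import sys

REPO = "https://github.com/example/crux-o-eval"  # placeholder URL
COMMIT = "0123abc"                               # frozen commit (placeholder)
SEED = 1234                                      # declared seed (placeholder)

def main():
    # 01: the public checkpoint or API endpoint under test.
    checkpoint = "example-org/example-model"     # placeholder identifier

    # 02: freeze the evaluation code to a specific commit.
    subprocess.run(["git", "clone", REPO, "eval"], check=True)
    subprocess.run(["git", "-C", "eval", "checkout", COMMIT], check=True)

    # 03: declare the environment so the run can be reproduced.
    print(f"python={sys.version.split()[0]} seed={SEED} checkpoint={checkpoint}")
    random.seed(SEED)

    # 04: the harness would emit one row per metric declared by this dataset;
    # the run.py entry point and its flags are assumptions, not a real CLI.
    subprocess.run([sys.executable, "eval/run.py",
                    "--model", checkpoint, "--seed", str(SEED)], check=True)

if __name__ == "__main__":
    main()
```

Pinning both the commit and the seed means a rerun of the script isolates the model as the only variable, so any score gap can be attributed to the submission rather than the harness.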