SWE-Bench Pro is a challenging benchmark for evaluating LLMs and agents on long-horizon software engineering tasks. Given a codebase and an issue, a language model must generate a patch that resolves the described problem. The public set contains 731 instances, and a commercial set adds 276 instances drawn from private, proprietary codebases.
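A minimal sketch of inspecting the public set, assuming it is hosted on the Hugging Face Hub; the dataset identifier, split name, and field names below follow the SWE-bench convention and are assumptions to verify against the official release.

```python
from datasets import load_dataset

# Identifier and split are assumptions; check the official SWE-Bench Pro
# release for the exact name of the public set.
ds = load_dataset("ScaleAI/SWE-bench_Pro", split="test")

print(len(ds))  # the public set should contain 731 instances
example = ds[0]
# Typical SWE-bench-style fields (assumed): the repository, the base commit
# to check out, the issue text, and the gold patch used for evaluation.
for key in ("repo", "base_commit", "problem_statement", "patch"):
    print(key, str(example.get(key, "<missing>"))[:80])
```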
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the corresponding step on the progress chart with your name.
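A minimal sketch of what a reproduction script might produce, assuming predictions follow the SWE-bench JSONL convention (instance_id / model_name_or_path / model_patch); the dataset identifier and the generate_patch() helper are hypothetical stand-ins for your own checkpoint or agent.

```python
import json

from datasets import load_dataset


def generate_patch(instance: dict) -> str:
    # Hypothetical: run your checkpoint or agent on the repository and issue,
    # and return a unified diff that resolves the problem statement.
    raise NotImplementedError


# Identifier and split are assumptions; see the official release for the
# exact public-set name.
ds = load_dataset("ScaleAI/SWE-bench_Pro", split="test")

with open("predictions.jsonl", "w") as f:
    for instance in ds:
        record = {
            "instance_id": instance["instance_id"],
            "model_name_or_path": "my-checkpoint",
            "model_patch": generate_patch(instance),
        }
        f.write(json.dumps(record) + "\n")
```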