Language Modeling · benchmark dataset · EN

EvalPlus

EvalPlus is an evaluation framework and leaderboard for LLMs on code-generation tasks (LLM4Code). The project provides rigorously extended test suites for popular coding benchmarks, notably HumanEval+ (extending HumanEval) and MBPP+ (extending MBPP), together with tooling to evaluate models (pass@1, chat vs. completion prompting, etc.). HumanEval+ and MBPP+ are enlarged, hand-verified test sets maintained by the EvalPlus team, with roughly 80x more tests than the original HumanEval and roughly 35x more tests than the original MBPP, respectively. In the NeurIPS paper “Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation” (arXiv:2305.01210), the authors report an aggregate coding score referred to as “EvalPlus” (e.g., in Table 3), computed from the constituent benchmarks (HumanEval, MBPP, HumanEval+, MBPP+).

Primary sources: the EvalPlus GitHub and leaderboard website (https://github.com/evalplus, https://evalplus.github.io/leaderboard.html), the Hugging Face dataset pages for the extended datasets (HumanEval+: https://huggingface.co/datasets/evalplus/humanevalplus, MBPP+: https://huggingface.co/datasets/evalplus/mbppplus), and the NeurIPS / arXiv paper (arXiv:2305.01210).
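For context on how these artifacts are typically consumed, the sketch below loads HumanEval+ from the Hugging Face dataset page cited above and applies the standard unbiased pass@k estimator (from the original HumanEval paper) that underlies the pass@1 numbers on the leaderboard. This is a minimal sketch, not the EvalPlus tooling itself: the split name and the HumanEval-style field names (task_id, prompt, test, entry_point) are assumptions to verify against the dataset page, and the sandboxed execution of generated code that EvalPlus performs is omitted here.

    # Minimal sketch: fetch HumanEval+ and estimate pass@k from sample counts.
    # Assumes the Hugging Face `datasets` library; split and field names are assumed.
    from math import comb

    from datasets import load_dataset


    def pass_at_k(n: int, c: int, k: int) -> float:
        """Unbiased pass@k: 1 - C(n - c, k) / C(n, k), with n samples per task
        and c of them passing every test in the extended suite."""
        if n - c < k:
            return 1.0  # every size-k draw contains at least one passing sample
        return 1.0 - comb(n - c, k) / comb(n, k)


    plus = load_dataset("evalplus/humanevalplus", split="test")
    print(plus[0]["task_id"])  # e.g. "HumanEval/0" (field name assumed)

    # Example: 10 samples for a task, 4 passing the extended tests -> pass@1 = 0.4
    print(round(pass_at_k(n=10, c=4, k=1), 3))

Swapping the dataset identifier to evalplus/mbppplus gives the MBPP+ analogue; only the identifier changes.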

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read the submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (a hypothetical skeleton follows this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
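For item 02, a reproduction script can be as small as a file that records the frozen commit, the seed, and the evaluation environment, and emits one row per declared metric. The skeleton below is a hypothetical example written for this page, not a required format; the model identifier, commit placeholder, output file name, and metric values are all placeholders to replace.

    # Hypothetical submission skeleton (all names are placeholders): records the
    # frozen commit and seed, the evaluation environment, and one row per metric.
    import json
    import platform
    import random
    import subprocess
    import sys

    MODEL = "<public checkpoint or API endpoint>"   # item 01
    COMMIT = "<frozen evaluation-harness commit>"   # item 02
    SEED = 1234                                     # item 02
    random.seed(SEED)

    # Item 03: declare the evaluation environment alongside the score.
    environment = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "pip_freeze": subprocess.run(
            [sys.executable, "-m", "pip", "freeze"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines(),
    }

    # Item 04: one row per metric declared by this dataset (values left to fill in).
    rows = [
        {"benchmark": "HumanEval+", "metric": "pass@1", "value": None},
        {"benchmark": "MBPP+", "metric": "pass@1", "value": None},
    ]

    with open("submission.json", "w") as f:
        json.dump({"model": MODEL, "commit": COMMIT, "seed": SEED,
                   "environment": environment, "results": rows}, f, indent=2)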