HumanEval-X is a benchmark for evaluating the multilingual capabilities of code generation models. It consists of 820 human-crafted problems, each with test cases, spanning Python, C++, Java, JavaScript, and Go, and supports tasks such as code generation and code translation. Each sample carries a task_id field (encoding the target language and problem ID) and a prompt field (the function declaration and docstring used as input for code generation).
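For orientation, the sketch below loads the Python split with the Hugging Face `datasets` library and inspects the two fields described above. The hub path `THUDM/humaneval-x`, the config name `"python"`, and the split name are assumptions based on the public mirror, not part of this page.

```python
# Minimal sketch: inspect HumanEval-X samples via the Hugging Face
# `datasets` library. The hub path "THUDM/humaneval-x", config "python",
# and split "test" are assumptions -- adjust to the mirror you use.
from datasets import load_dataset

ds = load_dataset("THUDM/humaneval-x", "python", split="test")

sample = ds[0]
print(sample["task_id"])       # e.g. "Python/0": target language + problem ID
print(sample["prompt"][:200])  # function declaration + docstring fed to the model
```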
One result is indexed, on one metric (Pass@1). The top-ranked row is the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | Pass@1 (%) |
|---|---|---|---|---|---|
| 01 | Qwen2.5-Plus | — | Dec 2024 | Qwen2.5 Technical Report · code | 87.80 |
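Pass@1 is conventionally reported with the unbiased pass@k estimator from the original HumanEval paper (Chen et al., 2021): generate n samples per problem, count the c that pass all test cases, and estimate the probability that at least one of k random draws passes. A minimal sketch, assuming per-problem (n, c) counts are already available; the example counts are hypothetical:

```python
# Minimal sketch of the unbiased pass@k estimator (Chen et al., 2021).
# Assumes you already have, for each problem, the number of generated
# samples n and the number c that passed all test cases.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n passes,
    given that c of the n samples are correct."""
    if n - c < k:
        return 1.0  # every size-k draw contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 200 samples per problem, averaged over problems.
results = [(200, 137), (200, 181), (200, 92)]  # hypothetical (n, c) pairs
score = sum(pass_at_k(n, c, k=1) for n, c in results) / len(results)
print(f"Pass@1 = {100 * score:.2f}")
```

For k=1 the estimator reduces to c/n per problem, so Pass@1 is simply the mean fraction of passing samples.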
Each row below marks a model that broke the previous record on Pass@1 (higher is better). Intermediate submissions remain in the leaderboard above; only SOTA-setting entries, each improving on the previous best, are re-listed here.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
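For reference, a reproduction script might look like the skeleton below: load the Python split, query a model for completions, and execute each problem's tests in a subprocess. The `generate()` call, the `"test"` field name, and the lack of sandboxing are placeholder assumptions, not the leaderboard's required harness.

```python
# Hypothetical reproduction-script skeleton: generate completions for the
# Python split and execute each problem's tests in a subprocess. The
# "test" field name and generate() are assumptions; a real harness needs
# proper sandboxing and should catch subprocess.TimeoutExpired.
import subprocess
import sys
import tempfile

from datasets import load_dataset


def generate(prompt: str) -> str:
    """Placeholder: call your model here and return the completed body."""
    raise NotImplementedError


def run_problem(sample: dict) -> bool:
    # Assemble prompt + model completion + test code into one program.
    program = sample["prompt"] + generate(sample["prompt"]) + "\n" + sample["test"]
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, timeout=30)
    return proc.returncode == 0  # tests assert/raise on failure


if __name__ == "__main__":
    ds = load_dataset("THUDM/humaneval-x", "python", split="test")
    passed = sum(run_problem(s) for s in ds)
    print(f"Pass@1 (1 greedy sample per problem) = {100 * passed / len(ds):.2f}")
```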