A straightforward and easy-to-reproduce evaluation benchmark for large language models, focused on instruction-following capability. IFEval contains around 500 prompts (541 in the train split) with verifiable instructions that can be checked objectively by simple heuristics, such as "write in more than 400 words", "mention the keyword 'AI' at least 3 times", "use no commas", or "include at least 3 highlighted sections". The benchmark defines 25 types of verifiable instructions, including punctuation constraints, length requirements, detectable content/format requirements, and keyword usage. Each prompt contains one or more verifiable instructions with corresponding kwargs for verification. The benchmark is designed for evaluating chat- or instruction-tuned language models and is one of the core benchmarks used in the Open LLM Leaderboard.
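Because the checks are plain string heuristics, scoring is cheap and fully deterministic. Below is a minimal sketch of three such verifiers, mirroring the example instructions above; these are illustrative reimplementations with hypothetical function names, not the official checkers from google-research/instruction_following_eval:

```python
import re

def check_number_words(response: str, num_words: int = 400) -> bool:
    # "write in more than 400 words": count whitespace-delimited tokens.
    return len(response.split()) > num_words

def check_keyword_frequency(response: str, keyword: str = "AI", frequency: int = 3) -> bool:
    # "mention the keyword 'AI' at least 3 times": case-insensitive whole-word matches.
    return len(re.findall(rf"\b{re.escape(keyword)}\b", response, re.IGNORECASE)) >= frequency

def check_no_comma(response: str) -> bool:
    # "use no commas": a pure string test, no judge model involved.
    return "," not in response
```

Each prompt carries a list of instruction ids plus a parallel kwargs list, so scoring a response amounts to running each named verifier with the stored parameters.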
One result indexed across one metric (Accuracy). The shaded row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy (%) |
|---|---|---|---|---|---|
| 01 | Qwen2.5-Plus | — | Dec 2024 | Qwen2.5 Technical Report · code | 86.30 |
Each row below marks a model that set a new record on Accuracy (higher is better). Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
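For reference, a reproduction script can be quite small. The sketch below pulls the prompts and their verification kwargs from the Hugging Face copy of the dataset (the google/IFEval dataset id and its field names are as published; the `generate` callable is yours to supply, and the two inline checkers cover only two of the 25 instruction types, so this is a skeleton rather than a full evaluator):

```python
from datasets import load_dataset

# Minimal checkers for two instruction types; a full reproduction should use
# the complete set from google-research/instruction_following_eval.
VERIFIERS = {
    "punctuation:no_comma": lambda resp, **kw: "," not in resp,
    "keywords:frequency": lambda resp, keyword="", frequency=1, **kw:
        resp.lower().count(keyword.lower()) >= frequency,
}

def evaluate(generate):
    """Strict prompt-level accuracy: `generate` maps a prompt string to a response string."""
    ds = load_dataset("google/IFEval", split="train")  # 541 prompts
    passed = 0
    for ex in ds:
        response = generate(ex["prompt"])
        # A prompt passes only if every one of its instructions is satisfied.
        # Unused kwargs keys are stored as None in the dataset, so drop them.
        passed += all(
            VERIFIERS[inst](response, **{k: v for k, v in kw.items() if v is not None})
            for inst, kw in zip(ex["instruction_id_list"], ex["kwargs"])
            if inst in VERIFIERS  # skip instruction types this sketch omits
        )
    return passed / len(ds)
```

Note that IFEval reports four numbers (prompt-level and instruction-level accuracy, each in strict and loose variants); the sketch above computes only the strict prompt-level score.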