Arena-Hard is a human-aligned benchmark of challenging, open-ended prompts sourced from live crowdsourcing platforms (notably Chatbot Arena), designed to separate models by capability while reflecting human preference. It was introduced in “From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline” (arXiv:2406.11939). The Arena-Hard-Auto variant (published on Hugging Face as Arena-Hard-Auto, v0.1) is an automatic evaluation suite of 500 challenging user queries extracted from Chatbot Arena. It is scored with an LLM-as-a-judge: the authors report prompting GPT-4-Turbo to compare each model's responses against a fixed baseline such as GPT-4-0314. The BenchBuilder pipeline described in the paper automates extracting high-quality prompts from crowdsourced data and produces an automatically judged benchmark whose rankings correlate strongly with, and separate models as well as, the live Chatbot Arena. Common uses: automatic, human-aligned evaluation of instruction-tuned LLMs and benchmarking alignment, safety, and helpfulness.
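To make the judging protocol concrete, here is a minimal LLM-as-a-judge sketch in Python. It is illustrative only: it assumes the `openai` package and an `OPENAI_API_KEY` in the environment, and the system prompt and five-way verdict labels are simplified stand-ins for the official templates in the arena-hard-auto repository (which also runs each comparison twice with answer positions swapped to reduce position bias).

```python
# Minimal LLM-as-a-judge sketch (illustrative, not the official Arena-Hard judge).
# Assumes the `openai` package and OPENAI_API_KEY; the prompt and labels are
# simplified stand-ins for the official judge templates.
from openai import OpenAI

client = OpenAI()

JUDGE_SYSTEM = (
    "You are an impartial judge. Compare assistant A and assistant B on the "
    "user's question and reply with exactly one verdict label: "
    "[[A>>B]], [[A>B]], [[A=B]], [[B>A]], or [[B>>A]]."
)

def judge_pair(question: str, answer_a: str, answer_b: str,
               judge_model: str = "gpt-4-turbo") -> str:
    """Ask the judge model for a pairwise verdict on a single prompt."""
    user_msg = (
        f"<question>\n{question}\n</question>\n\n"
        f"<answer_a>\n{answer_a}\n</answer_a>\n\n"
        f"<answer_b>\n{answer_b}\n</answer_b>"
    )
    resp = client.chat.completions.create(
        model=judge_model,
        temperature=0,
        messages=[
            {"role": "system", "content": JUDGE_SYSTEM},
            {"role": "user", "content": user_msg},
        ],
    )
    # The verdict label is expected somewhere in the reply; callers parse it out.
    return resp.choices[0].message.content.strip()
```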
1 result indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.
| # | Model | Org | Submitted | Paper / code | Score (%) |
|---|---|---|---|---|---|
| 01 | Qwen2.5-Plus | — | Dec 2024 | Qwen2.5 Technical Report · code | 81.40 |
Each row below marks a model that broke the previous record on the Arena-Hard score; higher is better. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.
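The score itself is a win rate against the baseline model, aggregated from the judge's pairwise verdicts. The sketch below shows a deliberately simplified aggregation under the five-label scheme from the judge sketch above; the official arena-hard-auto pipeline instead fits a Bradley-Terry model over the verdicts and reports bootstrapped confidence intervals.

```python
# Simplified score aggregation (illustrative). Maps each pairwise verdict to a
# win fraction for the candidate (assistant A) and averages over all prompts.
# The official pipeline fits a Bradley-Terry model with bootstrapping instead.
VERDICT_WEIGHTS = {
    "[[A>>B]]": 1.0,  # candidate strongly preferred
    "[[A>B]]": 1.0,   # candidate preferred
    "[[A=B]]": 0.5,   # tie
    "[[B>A]]": 0.0,   # baseline preferred
    "[[B>>A]]": 0.0,  # baseline strongly preferred
}

def score(verdicts: list[str]) -> float:
    """Return a win rate in [0, 100]; unparseable verdicts count as ties."""
    wins = sum(VERDICT_WEIGHTS.get(v, 0.5) for v in verdicts)
    return 100.0 * wins / len(verdicts)
```

For example, `score(["[[A>>B]]", "[[A=B]]", "[[B>A]]"])` returns 50.0.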
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.
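For reference, a minimal reproduction script might look like the sketch below. Everything here is a placeholder: the file names and JSONL schema are hypothetical, the candidate model is assumed to sit behind an OpenAI-compatible endpoint under the made-up id `my-checkpoint`, and `judge_pair` and `score` are the helper sketches defined earlier on this page. The canonical generation and judging scripts live in the arena-hard-auto repository.

```python
# Hypothetical end-to-end reproduction sketch. File names, the JSONL schema,
# and the "my-checkpoint" model id are placeholders, not an official format;
# judge_pair() and score() are the helper sketches defined above.
import json

from openai import OpenAI

client = OpenAI()

def generate(model: str, prompt: str) -> str:
    """Query the candidate model (assumed OpenAI-compatible) for one answer."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def main() -> None:
    # questions.jsonl: one {"question_id": "...", "prompt": "..."} object per line.
    with open("questions.jsonl") as f:
        questions = [json.loads(line) for line in f]
    # baseline_answers.json: mapping of question_id -> baseline answer text.
    with open("baseline_answers.json") as f:
        baseline = json.load(f)

    verdicts = []
    for q in questions:
        candidate = generate("my-checkpoint", q["prompt"])
        verdicts.append(judge_pair(q["prompt"], candidate, baseline[q["question_id"]]))
    print(f"Arena-Hard score: {score(verdicts):.2f}")

if __name__ == "__main__":
    main()
```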