Codesota · Natural Language Processing · Language Modeling · Arena-Hard
Language Modeling · benchmark dataset · EN

Arena-Hard (Arena-Hard-Auto).

Arena-Hard is a human-aligned benchmark of challenging open-ended prompts sourced from live crowdsourcing platforms (notably Chatbot Arena), designed to reliably separate LLM capabilities and reflect human preference. It was introduced in the paper “From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline” (arXiv:2406.11939). The Arena-Hard-Auto variant (published on Hugging Face as Arena-Hard-Auto, v0.1) is an automatic evaluation suite of 500 challenging user queries extracted from Chatbot Arena, scored with an LLM-as-a-judge: the dataset authors report prompting GPT-4-Turbo to compare each model response against a baseline such as GPT-4-0314. The BenchBuilder pipeline described in the paper automates extracting high-quality prompts from crowdsourced data and producing an automatically judged benchmark with high correlation and separability relative to the live Chatbot Arena. Common uses: automatic, human-aligned evaluation of instruction-tuned LLMs and benchmarking alignment, safety, and helpfulness.

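For a concrete picture of the LLM-as-a-judge setup, the sketch below shows a minimal pairwise judgment in Python. It is an illustration only: the prompt wording, the [[A>B]]-style verdict labels, and the judge_pair helper are assumptions made for this page, not the official Arena-Hard-Auto judge template or reference code.

```python
"""Minimal sketch of an Arena-Hard-Auto-style pairwise judgment.

Assumptions (not taken from the benchmark's reference code): the OpenAI
Python client is installed, JUDGE_PROMPT is a simplified stand-in for the
official judge template, and the [[A>B]]-style verdict labels are illustrative.
"""
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = (
    "You are an impartial judge. Compare the two assistant answers to the "
    "user question below and output a verdict label such as [[A>>B]], "
    "[[A>B]], [[A=B]], [[B>A]], or [[B>>A]].\n\n"
    "Question: {question}\n\nAnswer A: {answer_a}\n\nAnswer B: {answer_b}"
)

def judge_pair(question: str, answer_a: str, answer_b: str,
               judge_model: str = "gpt-4-turbo") -> str:
    """Ask the judge model to compare a candidate answer against the baseline."""
    prompt = JUDGE_PROMPT.format(
        question=question, answer_a=answer_a, answer_b=answer_b
    )
    response = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content  # contains the verdict label

if __name__ == "__main__":
    # Usage: answer_a could come from the GPT-4-0314 baseline and answer_b from
    # the model under evaluation; verdicts aggregated over all 500 prompts give
    # a preference score against the baseline.
    verdict = judge_pair(
        question="Explain the difference between a process and a thread.",
        answer_a="A baseline answer...",
        answer_b="A candidate answer...",
    )
    print(verdict)
```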
§ 01 · Leaderboard

Best published scores.

1 result indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: Accuracy · higher is better · 1 row

#    Model          Org    Submitted   Paper / code                       Accuracy
01   Qwen2.5-Plus   Qwen   Dec 2024    Qwen2.5 Technical Report · code    81.40
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
§ 03 · Progress

1 step
of state of the art.

Each row below marks a model that broke the previous record on Accuracy. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.

SOTA line · Accuracy
  1. Dec 19, 2024 · Qwen2.5-Plus · 81.40
Fig 3 · SOTA-setting models only. 1 entry, dated Dec 2024.
§ 04 · Literature

1 paper
tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

  • Qwen2.5 Technical Report
    Qwen: An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, TianHao Li, Tianyi Tang, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, Zihan Qiu
    Dec 2024 · Qwen2.5-Plus
§ 06 · Contribute

Have a score that beats
this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
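
Below is a hypothetical skeleton of such a reproduction script. Every specific name in it (the checkpoint ID my-org/my-model, the arena_hard_prompts.jsonl prompt file, the generation settings, the output path) is a placeholder assumption rather than a Codesota or Arena-Hard requirement; the point is simply to show a frozen seed, a declared model, and deterministic generation over the benchmark queries.

```python
"""Hypothetical reproduction-script skeleton for a submission.

All paths, IDs, and settings below are placeholders, not a required layout.
"""
import json
import random

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

SEED = 1234                                # frozen seed, declared in the submission
CHECKPOINT = "my-org/my-model"             # placeholder public checkpoint
PROMPTS_FILE = "arena_hard_prompts.jsonl"  # placeholder path to the benchmark queries
OUTPUT_FILE = "responses.jsonl"

random.seed(SEED)
torch.manual_seed(SEED)

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(
    CHECKPOINT, torch_dtype="auto", device_map="auto"
)

with open(PROMPTS_FILE) as f:
    prompts = [json.loads(line) for line in f]

with open(OUTPUT_FILE, "w") as out:
    for item in prompts:
        messages = [{"role": "user", "content": item["prompt"]}]
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        # Greedy decoding keeps the run reproducible across machines.
        output_ids = model.generate(input_ids, max_new_tokens=2048, do_sample=False)
        response = tokenizer.decode(
            output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
        )
        out.write(json.dumps({"id": item.get("id"), "response": response}) + "\n")
```

The generated responses.jsonl would then be passed to the judging step (see the judge sketch above) to produce the score reported in the leaderboard.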