LV-Eval is a bilingual (English and Chinese) long-context benchmark for evaluating large language models at context lengths of up to 256k tokens. It provides controllable evaluation across five length levels (16k, 32k, 64k, 128k, 256k) and covers single-hop and multi-hop QA tasks drawn from several bilingual datasets. To reduce knowledge leakage and increase difficulty and objectivity, the benchmark applies confusing facts insertion (CFI), keyword and phrase replacement (KPR), and a keyword-recall-based metric evaluated at every length level. Instance counts are balanced across the five lengths, so LV-Eval serves as a controlled stress test of long-context capability.
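To make the scoring concrete, here is a minimal sketch of a keyword-recall-gated F1 in Python. The function name, the 0.5 threshold, and the exact gating rule are illustrative assumptions, not LV-Eval's reference implementation:

```python
from collections import Counter

def keyword_recall_f1(prediction: str, answer: str, keywords: list[str],
                      recall_threshold: float = 0.5) -> float:
    """Word-level F1 against the gold answer, gated on the recall of
    the gold answer's keywords in the model output (sketch only)."""
    pred_lower = prediction.lower()
    pred_tokens = pred_lower.split()
    gold_tokens = answer.lower().split()
    if not pred_tokens or not gold_tokens:
        return 0.0

    # Gate: if too few gold keywords appear in the output, score 0.
    # This penalizes fluent answers that dodge the decisive facts.
    if keywords:
        hits = sum(kw.lower() in pred_lower for kw in keywords)
        if hits / len(keywords) < recall_threshold:
            return 0.0

    # Otherwise compute a standard bag-of-words F1.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

Gating on keyword recall before counting token overlap means an output must actually contain the key facts before any surface similarity is rewarded, which is what makes the metric more objective than plain F1.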
1 result indexed across 1 metric. The top row marks the current SOTA; ties are broken by submission date.
| # | Model | Org | Submitted | Paper / code | Accuracy |
|---|---|---|---|---|---|
| 01 | Qwen2.5-72B-Instruct | — | Dec 2024 | Qwen2.5 Technical Report · code | 60.40 |
Each row below marks a model that broke the previous record on Accuracy (higher is better). Intermediate submissions remain in the leaderboard above; only SOTA-setting entries are re-listed here.
Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it sets a new SOTA, annotate the step on the progress chart with your name.
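A reproduction script can be a short skeleton like the one below. The helper names (`generate`, `load_examples`), the example fields, and the per-level averaging are hypothetical placeholders, not the official harness; wire the two stubs to your own inference stack and the official LV-Eval data:

```python
# Hypothetical reproduction-script skeleton (assumes the
# keyword_recall_f1 metric sketch defined above).
import json
import sys

LENGTH_LEVELS = ["16k", "32k", "64k", "128k", "256k"]

def generate(checkpoint: str, prompt: str) -> str:
    # Stub: call your model here (local weights or a serving API).
    raise NotImplementedError("wire this to your checkpoint")

def load_examples(level: str) -> list[dict]:
    # Stub: return dicts with "prompt", "answer", and "keywords"
    # fields from the official LV-Eval data for this length level.
    raise NotImplementedError("wire this to the LV-Eval datasets")

def main(checkpoint: str) -> None:
    results = {}
    for level in LENGTH_LEVELS:
        examples = load_examples(level)
        scores = [
            keyword_recall_f1(generate(checkpoint, ex["prompt"]),
                              ex["answer"], ex["keywords"])
            for ex in examples
        ]
        results[level] = sum(scores) / len(scores)
    print(json.dumps(results, indent=2))

if __name__ == "__main__":
    main(sys.argv[1])
```

Reporting one score per length level, rather than a single aggregate, makes it easy to see where a model's quality starts degrading as the context grows.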