HELMET · Long-Context Language Model Evaluation · benchmark dataset · EN

HELMET: How to Evaluate Long-context Language Models Effectively and Thoroughly.

HELMET (How to Evaluate Long-context Language Models Effectively and Thoroughly) is a comprehensive benchmark for evaluating long-context language models (LCLMs). It comprises seven diverse, application-centric task categories designed to test a model's ability to process long inputs at multiple controllable lengths (up to 128k tokens in the paper), and it uses reliable, task-appropriate metrics together with few-shot prompting. Code and data are available from the princeton-nlp GitHub organization, and the data is hosted as a Hugging Face dataset (princeton-nlp/HELMET). HELMET was published as an ICLR 2025 paper (arXiv:2410.02694; also on OpenReview) and targets long-input understanding and processing rather than generation-only tasks.
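
A minimal sketch of pulling the benchmark data locally via huggingface_hub. The repo_id comes from the dataset card above; the internal file layout of the repo is an assumption, so check the dataset card and the princeton-nlp/HELMET GitHub README for the supported evaluation entry points.

```python
# Minimal sketch: download the HELMET data from the Hugging Face Hub.
# Assumes only that princeton-nlp/HELMET is a dataset repo (per the card
# above); the file layout inside the repo is not guaranteed here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="princeton-nlp/HELMET",
    repo_type="dataset",  # a dataset repo, not a model repo
)
print(f"HELMET data downloaded to: {local_dir}")
```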

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
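
As a concrete example, a reproduction script might look like the following. This is a hedged sketch, not the site's required format: the repository URL is real (princeton-nlp/HELMET), but the commit hash, config name, and the eval.py invocation are placeholders; consult the HELMET README and the submission guide for the actual entry point before submitting.

```python
#!/usr/bin/env python3
# Hedged sketch of a reproduction script: pins a commit and a seed so the
# run is repeatable. COMMIT, the config name, and the eval.py invocation
# are placeholders -- substitute the values from your actual run.
import random
import subprocess

REPO = "https://github.com/princeton-nlp/HELMET.git"
COMMIT = "<frozen-commit-hash>"  # placeholder: pin the exact commit you ran
SEED = 42                        # declare the seed alongside the commit

# Clone the evaluation code and check out the frozen commit.
subprocess.run(["git", "clone", REPO, "helmet"], check=True)
subprocess.run(["git", "-C", "helmet", "checkout", COMMIT], check=True)

random.seed(SEED)  # seed anything stochastic in your own wrapper code

# Placeholder invocation: check the HELMET README for the real entry
# point, flags, and config names before submitting.
subprocess.run(
    ["python", "eval.py", "--config", "configs/<task>.yaml"],
    cwd="helmet",
    check=True,
)
```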