
OpenRewriteEval

OpenRewriteEval is a benchmark for evaluating long-form, open-ended text rewriting by large language models. It covers a wide variety of rewriting types expressed through natural-language instructions and is designed to measure content preservation and to detect hallucinations or unintended modifications that models introduce when rewriting long-form text. The Hugging Face reupload (gabrielmbmb/OpenRewriteEval) contains a single train split with ~1.63k examples; each example has a source (the original long-form text), a target (the desired rewrite), a comment, and a task label with six classes, one per rewriting type. The HF dataset card notes that it was reuploaded from the original RewriteLM GitHub repository for convenience.
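A minimal sketch of loading the reupload with the Hugging Face `datasets` library and inspecting the fields named above; the field names follow the dataset card, and the slicing and printing choices are illustrative only:

```python
# Minimal sketch: load the HF reupload and inspect its fields.
# Assumes the `datasets` library is installed (pip install datasets).
from datasets import load_dataset

# Single "train" split with ~1.63k examples, per the dataset card.
ds = load_dataset("gabrielmbmb/OpenRewriteEval", split="train")

print(ds)  # row count and column names

example = ds[0]
print(example["source"][:200])  # original long-form text
print(example["target"][:200])  # desired rewritten text
print(example["comment"])       # accompanying comment
print(example["task"])          # one of six rewriting-type labels
```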

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate that step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One result row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
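For illustration, a reproduction-script skeleton covering items 02 and 03 above; the seed value, dataset revision, and the rewrite() placeholder are hypothetical, and exact dependency pins would go in a separate requirements file:

```python
# Hypothetical reproduction-script skeleton; names and values are placeholders.
import random

from datasets import load_dataset

SEED = 1234                # frozen seed (item 02)
DATASET_REVISION = "main"  # replace with a frozen dataset commit SHA (item 02)

random.seed(SEED)

def rewrite(source: str) -> str:
    """Placeholder: swap in your public checkpoint or API endpoint (item 01)."""
    return source  # identity baseline, for illustration only

ds = load_dataset(
    "gabrielmbmb/OpenRewriteEval",
    split="train",
    revision=DATASET_REVISION,  # pin the dataset to a specific commit
)

predictions = [rewrite(row["source"]) for row in ds]
# Score `predictions` against the `target` column with the metric(s)
# declared for this dataset, reporting one result row per metric (item 04).
```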