Codesota · Computer Vision · Image generation · ICE-Bench (Task1-31 Overall)
Image generation · benchmark dataset · EN

ICE-Bench: A Unified and Comprehensive Benchmark for Image Creating and Editing.

ICE-Bench (ICE = Image Creating and Editing) is a unified, multi-task benchmark for evaluating image generation and image editing models. Introduced in the paper “ICE-Bench: A Unified and Comprehensive Benchmark for Image Creating and Editing” (arXiv:2503.14482), it decomposes image creation and editing into four coarse categories (no-reference / reference × creating / editing) and, within those, 31 fine-grained tasks (Task 1–31). Evaluation is multi-dimensional, covering six dimensions (imaging quality, prompt following, source consistency, reference consistency, controllability, and aesthetics) via 11 automatic metrics. The authors provide benchmark code that computes per-task scores and an overall “Task1-31” aggregate score; the dataset and automated evaluation code are released under the MIT license on Hugging Face (ali-vilab/ICE-Bench).
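To make the scoring structure concrete, here is a minimal sketch of how an overall "Task1-31" aggregate could be computed from per-task scores. This is not the official evaluation code: the task keys, placeholder values, and the unweighted-mean aggregation are all assumptions for illustration; the released benchmark code may weight tasks or dimensions differently.

```python
from statistics import mean

# Hypothetical per-task scores as the evaluation code might emit them.
# Keys and values are placeholders, not official ICE-Bench results.
per_task_scores = {f"task_{i:02d}": 0.0 for i in range(1, 32)}  # Tasks 1-31

def overall_score(scores: dict[str, float]) -> float:
    """Aggregate 31 per-task scores into one 'Task1-31 Overall' number.

    Assumption: a simple unweighted mean; the actual benchmark may
    aggregate differently (e.g., per-dimension averaging first).
    """
    return mean(scores.values())

print(f"Task1-31 Overall: {overall_score(per_task_scores):.4f}")
```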

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch below)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
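A reproduction script in the spirit of this checklist might look like the following sketch. Everything specific here is a placeholder, not a real ICE-Bench interface: the pinned commit, checkpoint URL, the `ice_bench.evaluate` module, and its flags are invented for illustration only.

```python
#!/usr/bin/env python3
"""Illustrative reproduction script for a benchmark submission.

All identifiers below (commit hash, checkpoint URL, CLI module, flags)
are hypothetical; adapt them to your checkpoint and the benchmark's
actual evaluation code.
"""
import random
import subprocess

COMMIT = "0000000"      # placeholder: pin the exact evaluation-code commit
SEED = 42               # declared seed so the run is reproducible
CHECKPOINT = "https://example.com/my-model.ckpt"  # placeholder public checkpoint

def freeze_environment() -> None:
    # Check out the pinned commit of the evaluation repo (assumed to be
    # the current working directory). Dependencies should be pinned too,
    # e.g. via a committed requirements.txt.
    subprocess.run(["git", "checkout", COMMIT], check=True)

def set_seeds(seed: int) -> None:
    # Seed every RNG your model stack uses (add torch/numpy as needed).
    random.seed(seed)

if __name__ == "__main__":
    freeze_environment()
    set_seeds(SEED)
    # Hypothetical entry point: run all 31 tasks and emit one row per
    # metric, as the submission checklist requires.
    subprocess.run(
        ["python", "-m", "ice_bench.evaluate",
         "--checkpoint", CHECKPOINT,
         "--tasks", "1-31",
         "--seed", str(SEED),
         "--output", "results.csv"],
        check=True,
    )
```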