Image editing · benchmark dataset · EN

RISEBench (Reasoning-Informed viSual Editing Benchmark).

RISEBench (Reasoning-Informed viSual Editing Benchmark) is a benchmark and dataset for evaluating multimodal models on instruction-driven image editing tasks that require reasoning beyond low-level appearance changes. Introduced in the paper “Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing” (arXiv:2504.02826), it covers four reasoning categories: Temporal, Causal, Spatial, and Logical. Each expert-curated test case pairs an input image with a complex editing instruction whose correct execution requires understanding the scene context.

The authors propose an evaluation framework that scores edits along three dimensions: Instruction Reasoning, Appearance Consistency, and Visual Plausibility, judged both by humans and by an “LMM-as-a-judge” protocol. They evaluate a range of open-source and proprietary LMMs, reporting results for systems such as GPT-4o / GPT-4o-Image.

The sources report 360 high-quality, human-expert-curated test cases covering the four reasoning categories. The data, evaluation scripts, and example runs are available from the primary resources: the arXiv paper (2504.02826), the official GitHub repository (PhoenixZ810/RISEBench), and the Hugging Face dataset page (PhoenixZ/RISEBench).
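
For orientation, here is a minimal Python sketch of how the Hugging Face release might be consumed: it loads the dataset, counts test cases per reasoning category, and averages per-sample scores along the three judged dimensions. The split name, the `category` field, and the aggregation scheme are assumptions for illustration, not the official schema or protocol; consult the dataset card and the repository's evaluation scripts for the real ones.

```python
# Minimal sketch of consuming RISEBench from the Hugging Face Hub.
# Assumptions (not taken from the dataset card): the split is named "test" and each
# example exposes a "category" field; the aggregation below is illustrative only,
# not the paper's official scoring protocol.
from collections import Counter

from datasets import load_dataset


def load_cases(split: str = "test"):
    """Load the benchmark and count test cases per reasoning category
    (Temporal / Causal / Spatial / Logical)."""
    ds = load_dataset("PhoenixZ/RISEBench", split=split)
    return ds, Counter(ex["category"] for ex in ds)


def average_scores(records):
    """Average each judged dimension across samples; RISEBench scores edits on
    Instruction Reasoning, Appearance Consistency, and Visual Plausibility."""
    dims = ("instruction_reasoning", "appearance_consistency", "visual_plausibility")
    return {d: sum(r[d] for r in records) / len(records) for d in dims}


if __name__ == "__main__":
    ds, counts = load_cases()
    print(f"{len(ds)} test cases")  # sources report 360 expert-curated cases
    for category, n in sorted(counts.items()):
        print(f"  {category}: {n}")

    # Dummy judge outputs, purely to illustrate the record shape expected by average_scores.
    demo = [{"instruction_reasoning": 5, "appearance_consistency": 4, "visual_plausibility": 5}]
    print(average_scores(demo))
```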

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it tops the table, credit the step on the progress chart to you.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
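
As a starting point for item 02, here is a minimal sketch of a reproduction script: it pins a seed, records the evaluated commit and Python environment, and leaves a placeholder where the benchmark's official evaluation command would go. The file layout, output path, and final invocation are hypothetical; use the entry point from the official RISEBench repository.

```python
#!/usr/bin/env python3
# Sketch of a submission-style reproduction script. The evaluation command itself is a
# placeholder; substitute the actual entry point and flags from the official RISEBench repo.
import json
import platform
import random
import subprocess
import sys

SEED = 0  # declared seed for any stochastic components of the run


def git_commit() -> str:
    """Return the commit the evaluation code is frozen at (run inside the repo checkout)."""
    out = subprocess.run(["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True)
    return out.stdout.strip()


def record_environment(path: str = "environment.json") -> None:
    """Write commit, seed, Python version, and installed packages so the run can be rebuilt."""
    freeze = subprocess.run(
        [sys.executable, "-m", "pip", "freeze"], capture_output=True, text=True, check=True
    )
    with open(path, "w") as f:
        json.dump(
            {
                "commit": git_commit(),
                "seed": SEED,
                "python": platform.python_version(),
                "packages": freeze.stdout.splitlines(),
            },
            f,
            indent=2,
        )


if __name__ == "__main__":
    random.seed(SEED)
    record_environment()
    # Placeholder: replace with the benchmark's real evaluation command, pointed at your
    # public checkpoint or API endpoint, with the repo checked out at the recorded commit.
    print("Environment recorded in environment.json; now invoke the official evaluation script.")
```

Bundling the commit, seed, Python version, and `pip freeze` output in one artifact covers items 02 and 03 in a form reviewers can diff against their own runs.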