Image editing · benchmark dataset · EN

GEdit-Bench

GEdit-Bench is a real-world image-editing evaluation benchmark released by the StepFun / Step1X-Edit team to assess image-editing models on authentic user instructions. The Hugging Face dataset contains ~1.21k examples (single split: train) of image + editing-instruction pairs plus metadata. The schema includes fields such as task_type (11 edit categories), key, instruction, instruction_language (en/zh), input_image / input_image_raw, and Intersection_exist. The benchmark is designed for automatic, LLM-based evaluation: the Step1X-Edit paper and project page report model scores computed by GPT-4.1, along with comparisons to other graders such as Qwen2.5-VL. The dataset is hosted on Hugging Face under an MIT license and was introduced alongside the Step1X-Edit paper (arXiv:2504.17761).
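For orientation, here is a minimal sketch of loading the benchmark with the Hugging Face datasets library and inspecting the schema fields listed above. The repo id stepfun-ai/GEdit-Bench is an assumption; verify the exact id on the Hub page linked from the paper.

```python
# Minimal sketch: load GEdit-Bench from the Hugging Face Hub and inspect
# its schema. The repo id below is an assumption, not confirmed by this page.
from datasets import load_dataset

ds = load_dataset("stepfun-ai/GEdit-Bench", split="train")
print(len(ds))           # ~1.21k examples
print(ds.column_names)   # task_type, key, instruction, instruction_language, ...

# Narrow to English instructions (instruction_language is "en" or "zh").
en_examples = ds.filter(lambda ex: ex["instruction_language"] == "en")

sample = en_examples[0]
print(sample["task_type"], "->", sample["instruction"])
# sample["input_image"] decodes to a PIL.Image when the column uses the
# standard datasets Image feature.
```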

§ 01 · Leaderboard

Best published scores.

No results indexed yet; be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
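As a hedged illustration of items 02 and 03, the sketch below shows one way to pin a seed and declare the evaluation environment in a reproduction script. load_model and evaluate_gedit are hypothetical placeholders for your own checkpoint loader and grading harness; only the pinning pattern is the point.

```python
# Hedged sketch of a reproduction script satisfying the checklist above.
# Nothing here is an official harness; adapt to your own codebase.
import random
import sys

import numpy as np
import torch

SEED = 42
COMMIT = "<frozen-commit-sha>"  # pin the exact code revision you evaluated

def set_seed(seed: int) -> None:
    # Seed every RNG the evaluation touches for reproducibility.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

def declare_environment() -> None:
    # Record interpreter and dependency versions alongside the score.
    print("python:", sys.version.split()[0])
    print("torch:", torch.__version__)
    print("commit:", COMMIT)

if __name__ == "__main__":
    set_seed(SEED)
    declare_environment()
    # model = load_model("your-public-checkpoint")   # hypothetical helper
    # scores = evaluate_gedit(model, split="train")  # hypothetical; report
    # one row per metric declared by this dataset.
```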