KRIS-Bench (Knowledge-based Reasoning in Image-editing Systems Benchmark) is a diagnostic benchmark for instruction-driven image editing that evaluates models' knowledge-based reasoning rather than visual fidelity alone. It organizes editing tasks into a cognitively informed taxonomy of three knowledge types (Factual, Conceptual, and Procedural) and defines 22 representative editing tasks that probe different forms of knowledge reasoning in image editing.

KRIS-Bench provides per-task sub-metrics and a composite Knowledge Plausibility metric; the authors also report an overall score and several sub-scores, evaluated automatically with a large multimodal model (GPT-4o in the reported setup).

The dataset is released in Parquet format and pairs image inputs with natural-language editing instructions, knowledge-based explanations, and ground-truth edited images. Typical fields are: category (task category), id (sample id), instruction (editing instruction text), explanation (knowledge-based explanation), image (input image), and gt_image (ground-truth edited image). The publicly available dataset on Hugging Face contains roughly 1.27k samples in a single split and is released under a permissive license (replicas on the Hub list CC-BY-4.0 and Apache-2.0 variants).
No results indexed yet; be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.