OmniContext is a small subject-driven any-to-image / image-to-image benchmark (400 examples) released as part of the OmniGen2 project to evaluate in-context image generation. The benchmark pairs diverse input images with natural-language instructions and is organized into per-setting categories (reported in evaluations as SINGLE / MULTIPLE / SCENE), with per-setting and average scores. Evaluation is automated via an LLM-based, interpretable metric pipeline (the dataset page cites GPT-4.1 for metric-driven assessment). The Hugging Face dataset provides a single split (train, 400 rows) with fields such as task_type, instruction, and input_images, and is licensed under Apache-2.0. Project resources and code are available from the OmniGen2 GitHub repository and the dataset page on Hugging Face.
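As a rough illustration of how results might be aggregated into the per-setting and average scores described above, here is a minimal Python sketch. The field name `task_type` comes from the dataset page; the score values, the `setting_of` helper, and the assumption that `task_type` strings are prefixed with their setting (e.g. `single_...`, `multiple_...`, `scene_...`) are all hypothetical, for illustration only.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical evaluation records mimicking the OmniContext schema.
# task_type is a real field from the dataset page; the scores here
# are invented purely for illustration.
records = [
    {"task_type": "single_character", "score": 7.2},
    {"task_type": "single_object", "score": 6.8},
    {"task_type": "multiple_character", "score": 5.9},
    {"task_type": "scene_character", "score": 6.1},
]

def setting_of(task_type: str) -> str:
    """Map a fine-grained task_type to a SINGLE / MULTIPLE / SCENE bucket.

    The grouping follows the evaluation categories named on the dataset
    page; the prefix convention assumed here (setting name before the
    first underscore) is an illustrative guess, not the documented schema.
    """
    return task_type.split("_", 1)[0].upper()

by_setting = defaultdict(list)
for r in records:
    by_setting[setting_of(r["task_type"])].append(r["score"])

# Per-setting means, then an overall average across settings.
per_setting = {s: mean(scores) for s, scores in by_setting.items()}
average = mean(per_setting.values())
```

The averaging order (mean of per-setting means, rather than a flat mean over all examples) is one plausible reading of "per-setting and average scores"; the official evaluation code in the OmniGen2 repository is the authoritative reference.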
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top spot — annotate the step on the progress chart with your name.