GenEval is an object-focused evaluation framework for text-to-image alignment that enables fine-grained, instance-level evaluation of compositional generation. Rather than relying on holistic metrics such as FID or CLIPScore, GenEval evaluates object-level properties: object co-occurrence, relative spatial position, object count, and attribute binding (e.g., color). The framework leverages off-the-shelf object detectors and other discriminative vision models to build automated, verifiable evaluators that correlate well with human judgments. The authors provide code, evaluation scripts, and benchmark prompts/tasks (repository: https://github.com/djghosh13/geneval, MIT license) to run GenEval evaluators against text-to-image models and to report per-task scores for multi-object composition and related evaluations.
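To illustrate the detector-based evaluation idea, here is a minimal sketch of a GenEval-style check. It is not GenEval's actual code or API: the detection format (label, color) and the `check_task` helper are assumptions for illustration, standing in for real detector and color-classifier outputs.

```python
# Hypothetical sketch of a detector-based compositional check
# (not GenEval's real API). Each detection is a (label, color) pair,
# as an object detector plus a color classifier might report.
# A task spec lists required (label, color_or_None, min_count) triples.

def check_task(detections, spec):
    """Return True iff every required object appears at least
    `min_count` times, with the required color when one is given."""
    for label, color, min_count in spec:
        matches = [
            d for d in detections
            if d[0] == label and (color is None or d[1] == color)
        ]
        if len(matches) < min_count:
            return False
    return True

# Example: an image with one brown dog and two black cats.
detections = [("dog", "brown"), ("cat", "black"), ("cat", "black")]

# Co-occurrence + count + attribute binding all pass:
print(check_task(detections, [("dog", None, 1), ("cat", "black", 2)]))
# Attribute binding fails (no white cat detected):
print(check_task(detections, [("cat", "white", 1)]))
```

Per-task scores in the benchmark are then just the fraction of prompts for which such a check passes.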
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.