
GenEval: An Object-Focused Framework for Evaluating Text-to-Image Alignment.

GenEval is an object-focused evaluation framework for text-to-image alignment that enables fine-grained, instance-level assessment of compositional generation. Rather than relying on holistic metrics such as FID or CLIPScore, GenEval scores object-level properties: object co-occurrence, spatial relations/position, object counting, and attribute binding (e.g., color). The framework leverages off-the-shelf object detectors and other discriminative vision models to build automated, verifiable evaluators that correlate well with human judgments. The authors provide code, evaluation scripts, and benchmark prompts/tasks (repository: https://github.com/djghosh13/geneval, MIT license) for running the GenEval evaluators against text-to-image models and reporting per-task scores for multi-object composition and related evaluations.
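To make the detector-based verification concrete, here is a minimal sketch of how object-level checks can be derived from detector outputs. All names (`Detection`, `check_count`, `check_color`, `check_left_of`) and the detection format are illustrative assumptions, not the repository's actual API; a real pipeline would populate the detections with an off-the-shelf detector and a color classifier.

```python
# Hypothetical sketch of GenEval-style object-level checks.
# Detections would come from an off-the-shelf detector in practice.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # detected object class
    color: str   # dominant color, e.g. from a separate color classifier
    x: float     # box center x, normalized to [0, 1]
    y: float     # box center y, normalized to [0, 1]

def check_count(dets, label, expected):
    """Counting task: exactly `expected` instances of `label` detected."""
    return sum(d.label == label for d in dets) == expected

def check_color(dets, label, color):
    """Attribute-binding task: some `label` instance carries `color`."""
    return any(d.label == label and d.color == color for d in dets)

def check_left_of(dets, a, b):
    """Position task: some `a` lies left of some `b` (by box centers)."""
    ax = [d.x for d in dets if d.label == a]
    bx = [d.x for d in dets if d.label == b]
    return bool(ax and bx) and min(ax) < max(bx)

# Example: detections for "a red apple to the left of a dog"
dets = [Detection("apple", "red", 0.2, 0.5),
        Detection("dog", "brown", 0.7, 0.5)]
print(check_count(dets, "apple", 1))        # counting
print(check_color(dets, "apple", "red"))    # attribute binding
print(check_left_of(dets, "apple", "dog"))  # spatial relation
```

Because each check is a discrete pass/fail decision grounded in detector outputs, per-task scores are simply the fraction of prompts whose checks all pass.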

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
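A reproduction script that satisfies the checklist above might look like the following sketch. The commit SHA, seed, and Python version are illustrative placeholders, and the commented steps are generic; the actual entry points are repo-specific.

```shell
#!/usr/bin/env bash
# Hypothetical reproduction-script template; values are illustrative.
set -euo pipefail

GENEVAL_COMMIT=abc1234   # frozen commit of https://github.com/djghosh13/geneval
SEED=42                  # fixed sampling seed for the text-to-image model
PYTHON_VERSION=3.10      # declared evaluation environment

# Typical steps (commented out; exact commands are repo-specific):
#   git clone https://github.com/djghosh13/geneval && cd geneval
#   git checkout "$GENEVAL_COMMIT"
#   pip install -r requirements.txt
#   <run the model's sampling script with --seed "$SEED">
#   <run the GenEval evaluation scripts on the generated images>

echo "commit=$GENEVAL_COMMIT seed=$SEED python=$PYTHON_VERSION"
```

Pinning the commit and seed up front lets us rerun the evaluation byte-for-byte and attribute any score discrepancy to the environment rather than the script.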