Codesota · General · Video-Language Models · CG-Bench
Video-Language Models · benchmark dataset · English

CG-Bench: A Comprehensive Benchmark for Computer Graphics Understanding.

CG-Bench is a comprehensive benchmark for evaluating Large Multimodal Models (LMMs) on computer graphics understanding. It covers a range of computer graphics scenarios and tasks, including 3D graphics, rendering, animation, and visual effects, and tests a model's ability to understand and reason about computer-generated visual content across different domains and complexity levels.
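
To make the evaluation setup concrete, here is a minimal scoring sketch for a multiple-choice split. The file name and field layout (`media`, `question`, `choices`, `answer`) are assumptions for illustration, not the official CG-Bench annotation format; the stub predictor stands in for a real LMM call.

```python
"""Minimal sketch of scoring a multiple-choice CG-Bench-style split.

Assumed (hypothetical) format: each line of the annotation file is a JSON
object with "media" (path to the rendered clip or image), "question",
"choices" (list of strings), and "answer" (index of the correct choice).
"""
import json
from pathlib import Path
from typing import Callable, Sequence


def predict_stub(media: Path, question: str, choices: Sequence[str]) -> int:
    """Placeholder model: always picks the first choice. Swap in a real LMM call."""
    return 0


def evaluate(annotations: Path,
             predict: Callable[[Path, str, Sequence[str]], int]) -> float:
    """Return accuracy over all items in a JSONL annotation file."""
    correct = total = 0
    with annotations.open() as f:
        for line in f:
            item = json.loads(line)
            pred = predict(Path(item["media"]), item["question"], item["choices"])
            correct += int(pred == item["answer"])
            total += 1
    return correct / max(total, 1)


if __name__ == "__main__":
    # "cg_bench_val.jsonl" is a placeholder path, not a published file name.
    print(f"accuracy = {evaluate(Path('cg_bench_val.jsonl'), predict_stub):.4f}")
```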

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (a minimal sketch follows this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
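
The sketch below shows one way to shape a reproduction script around that checklist. The checkpoint identifier, metric name, and output layout are placeholders; only the structure (frozen seed, declared environment, one row per metric) mirrors the requirements above.

```python
"""Sketch of a reproduction script shaped to the submission checklist.

CHECKPOINT, the metric names, and the results file name are hypothetical;
replace run_evaluation() with the actual CG-Bench evaluation call.
"""
import csv
import json
import platform
import random
import sys

SEED = 1234                                  # frozen seed, cited in the submission
CHECKPOINT = "org/model-name@<commit-sha>"   # public checkpoint pinned to a commit


def declare_environment() -> dict:
    """Record interpreter, platform, seed, and checkpoint so reviewers can rebuild the env."""
    return {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "seed": SEED,
        "checkpoint": CHECKPOINT,
    }


def run_evaluation() -> dict:
    """Placeholder evaluation: returns one entry per metric declared by the dataset."""
    random.seed(SEED)
    return {"accuracy": 0.0}


if __name__ == "__main__":
    print(json.dumps(declare_environment(), indent=2))
    with open("results.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["metric", "value"])
        for metric, value in run_evaluation().items():
            writer.writerow([metric, value])  # one row per metric
```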