
OmniBench

OmniBench is a tri-modal (audio + image + text) benchmark designed to evaluate the ability of omni-language / cross-modal models to recognize, interpret, and reason across visual, acoustic, and textual inputs simultaneously. It collects multi-modal QA-style examples covering diverse task types (e.g., action/activity recognition, multi-modal question answering). The Hugging Face dataset card (m-a-p/OmniBench) lists a single split of ~1.14k rows with a schema that includes task type, question, options, answer, and audio/image content and file paths; the dataset is distributed in Parquet format and tagged with the audio, image, and text modalities. The paper (arXiv:2409.15272) and project page describe the benchmark's motivation and evaluation protocol.
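The schema described on the card can be inspected directly with the Hugging Face `datasets` library. A minimal sketch, assuming only that the benchmark loads under its card name m-a-p/OmniBench; the split name and column names printed here are whatever the card actually declares, not hard-coded assumptions:

```python
from datasets import load_dataset

# Load the benchmark from the Hugging Face Hub (single split, ~1.14k rows,
# distributed as Parquet).
ds = load_dataset("m-a-p/OmniBench")
print(ds)  # shows the split name and row count

# Inspect the schema without decoding the audio/image payloads.
split = next(iter(ds))
print(ds[split].features)  # e.g. task type, question, options, answer, audio/image fields
```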

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
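
As a rough illustration of requirements 02 and 03, a reproduction script might pin the evaluation code to an exact commit, fix the random seed, and print the environment. Everything below is a placeholder sketch: the repository URL, commit hash, and seed are hypothetical, not a real OmniBench harness.

```python
import random
import subprocess
import sys

# Hypothetical values -- replace with your actual evaluation repo and pins.
EVAL_REPO = "https://github.com/your-org/your-eval-harness"  # placeholder URL
COMMIT = "deadbeef"  # frozen commit hash (requirement 02)
SEED = 1234          # fixed seed (requirement 02)

# Pin the evaluation code to an exact commit so the run is reproducible.
subprocess.run(["git", "clone", EVAL_REPO, "eval"], check=True)
subprocess.run(["git", "-C", "eval", "checkout", COMMIT], check=True)

# Fix randomness and declare the evaluation environment (requirement 03).
random.seed(SEED)
print(f"python={sys.version.split()[0]} seed={SEED} commit={COMMIT}")
```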