Vision-Language Models · benchmark dataset · EN

MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities

MM-Vet (short for “Multimodal Veterinarian”) is an evaluation benchmark for large multimodal models (LMMs) that tests models on complex tasks requiring integrated vision-language capabilities. The benchmark is built around the insight that advanced multimodal abilities arise from integrating core vision-language skills: the authors define six core VL capabilities and evaluate 16 capability integrations of interest. MM-Vet includes both open-ended and closed QA-style items, uses an LLM-based evaluator to score open-ended answers, and aims to provide diagnostic insights beyond single-number rankings. The project provides code, data, and an online evaluator on GitHub, as well as a formatted dataset version used in the lmms-eval pipeline on Hugging Face. The formatted dataset includes fields such as question_id, image, question, answer, image_source, and capability.
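A minimal sketch of loading the formatted dataset and counting items per capability tag. The repo id ("lmms-lab/MMVet") and split name ("test") are assumptions; check the dataset card referenced by your lmms-eval version for the exact identifiers. Only the field names listed above are taken from this page.

```python
from collections import Counter

from datasets import load_dataset

# Assumed Hugging Face repo id and split; adjust to match the dataset card you use.
ds = load_dataset("lmms-lab/MMVet", split="test")

# Fields documented above: question_id, image, question, answer, image_source, capability
sample = ds[0]
print(sample["question_id"], sample["question"])
print("reference answer:", sample["answer"])
print("image source:", sample["image_source"])

# The capability field tags each item with the core VL capabilities it integrates,
# which is what enables per-capability diagnostic breakdowns.
counts = Counter()
for row in ds:
    cap = row["capability"]
    # Depending on the formatting, this may be a list of names or a comma-separated
    # string; handle both defensively.
    caps = cap if isinstance(cap, list) else str(cap).split(",")
    counts.update(c.strip() for c in caps)
print(counts)
```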

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs (see the sketch after this list)
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · Declared evaluation environment (Python, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
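A hedged sketch of how the checklist above might be bundled for a submission; every field name, URL, and value is an illustrative placeholder, not a schema defined by CodeSOTA.

```python
# Illustrative only: mirrors the five checklist items above with hypothetical values.
submission = {
    "checkpoint": "https://huggingface.co/your-org/your-lmm",  # 01: public checkpoint or API endpoint
    "reproduction": {
        "script": "scripts/run_mmvet.sh",   # 02: reproduction script
        "commit": "abc1234",                # frozen commit of the evaluation code
        "seed": 1234,                       # fixed random seed
    },
    "environment": {                        # 03: declared evaluation environment
        "python": "3.10",
        "dependencies": ["torch==2.1.0", "transformers==4.40.0", "lmms-eval==0.2.0"],
    },
    "results": [                            # 04: one row per metric declared by this dataset
        {"metric": "MM-Vet score", "value": 0.0},
    ],
    "contact": "you@example.com",           # 05: contact for follow-up on discrepancies
}
```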