MM-Vet is an evaluation benchmark for large multimodal models (LMMs) that tests models on complex, integrated vision-language tasks. The benchmark is built around the insight that advanced multimodal abilities arise from combining core vision-language capabilities: the authors define six core VL capabilities (recognition, OCR, knowledge, language generation, spatial awareness, and math) and evaluate 16 capability integrations of interest. MM-Vet includes both open-ended and closed-form QA items, uses an LLM-based evaluator to grade open-ended answers, and aims to provide diagnostic insight beyond single-number rankings. The project provides code, data, and an online evaluator (GitHub), as well as a formatted dataset version used in the lmms-eval pipeline (Hugging Face). The Hugging Face formatted dataset includes fields such as question_id, image, question, answer, image_source, and capability.
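For illustration, here is a minimal sketch of loading the formatted dataset with the Hugging Face `datasets` library and inspecting the fields listed above. The repository id `lmms-lab/MMVet` and the `test` split name are assumptions inferred from the lmms-eval pipeline, not confirmed by this page; verify both on the Hub before running.

```python
from datasets import load_dataset

# Assumed repository id and split; check the Hugging Face Hub for the
# exact identifier used by the lmms-eval pipeline.
dataset = load_dataset("lmms-lab/MMVet", split="test")

# Each record should carry the fields named above:
# question_id, image, question, answer, image_source, capability.
sample = dataset[0]
print(sample["question_id"])
print(sample["question"])
print(sample["answer"])      # reference answer consumed by the LLM-based evaluator
print(sample["capability"])  # which core VL capabilities the item integrates

# Image columns are decoded to PIL images by `datasets`.
sample["image"].save("example.png")
```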
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.