MMVP (Multimodal Visual Patterns) is a small benchmark created to study systematic visual shortcomings of modern multimodal/vision-language models. It is built from "CLIP-blind" image pairs: images that CLIP-style embeddings place close together despite clear visual differences. Failures are categorized into nine basic visual pattern classes (e.g., orientation and direction, quantity and count, color and appearance, viewpoint and perspective). The collection is intended for perception and reasoning evaluation of multimodal LLMs/VLMs; the authors evaluate models such as GPT-4V. The Hugging Face mirror contains ~300 images with paired questions (MMVP) and a simplified, pattern-labeled variant for evaluating CLIP-style vision encoders (MMVP_VLM, ~270 examples across the 9 classes). The dataset was released with the paper "Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs" and the accompanying code release on GitHub.
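As a minimal sketch of how you might pull the mirror locally for evaluation: the snippet below uses `huggingface_hub`, and the repo ids `MMVP/MMVP` and `MMVP/MMVP_VLM` plus the file layout are assumptions inferred from the description above, not verified details.

```python
# Minimal sketch: download the MMVP mirror from the Hugging Face Hub.
# Assumption: the dataset lives under the repo ids "MMVP/MMVP" and
# "MMVP/MMVP_VLM" (hypothetical here); adjust to the actual repo names.
from huggingface_hub import snapshot_download

# Fetch a full dataset snapshot (images plus any question/label files)
# into the local Hugging Face cache and return the directory path.
mmvp_dir = snapshot_download(repo_id="MMVP/MMVP", repo_type="dataset")
mmvp_vlm_dir = snapshot_download(repo_id="MMVP/MMVP_VLM", repo_type="dataset")

print("MMVP downloaded to:", mmvp_dir)
print("MMVP_VLM downloaded to:", mmvp_vlm_dir)
```

From there, an evaluation harness would iterate over the image pairs and their questions; the exact file format (e.g., a CSV of questions next to the images) should be checked against the repo itself.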
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it sets a new top result, annotate the step on the progress chart with your name.