MMT-Bench is a large, curated multimodal multitask benchmark for evaluating large vision-language models (LVLMs). It contains 31,325 multiple-choice visual questions covering 32 core meta-tasks and 162 subtasks, spanning diverse multimodal scenarios (e.g., vehicle driving, embodied navigation) that require visual recognition, localization, reasoning, expert knowledge, and planning. The benchmark aims to provide a comprehensive, task-map-style evaluation of LVLMs’ multitask capabilities; the project provides dataset files on Hugging Face, code on GitHub, and a public leaderboard. The dataset release metadata indicates an MIT license.
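Since the dataset files are hosted on Hugging Face, a quick way to inspect them locally is to download the repository snapshot. The sketch below assumes a repo id of "OpenGVLab/MMT-Bench"; verify the exact id and file layout on the project's dataset card before relying on it.

```python
# Minimal sketch: fetch the MMT-Bench data files for local inspection.
# The repo id below is an assumption -- substitute the id shown on the
# project's Hugging Face page.
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="OpenGVLab/MMT-Bench",  # assumed repo id; verify on the dataset card
    repo_type="dataset",
)

# List the downloaded files to see how questions and images are packaged.
for path in sorted(Path(local_dir).rglob("*")):
    if path.is_file():
        print(path.relative_to(local_dir))
```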
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run the script, publish the score, and, if it takes the top spot, annotate the corresponding step on the progress chart with your name.
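For orientation, a reproduction script might look something like the sketch below: load the submitted checkpoint, answer each multiple-choice question, and report accuracy. The checkpoint path, data format, and the `load_checkpoint` / `answer_question` interface are hypothetical placeholders, not the benchmark's official harness.

```python
# Hypothetical sketch of a reproduction script: evaluate a checkpoint on a
# JSON file of multiple-choice questions and print overall accuracy.
import json

from my_lvlm import load_checkpoint, answer_question  # hypothetical model API


def evaluate(checkpoint_path: str, questions_path: str) -> float:
    model = load_checkpoint(checkpoint_path)
    with open(questions_path) as f:
        questions = json.load(f)  # assumed: list of {image, question, options, answer}

    correct = 0
    for q in questions:
        # answer_question is expected to return one option letter, e.g. "A".
        prediction = answer_question(model, q["image"], q["question"], q["options"])
        correct += int(prediction == q["answer"])
    return correct / len(questions)


if __name__ == "__main__":
    accuracy = evaluate("checkpoint.pt", "mmt_bench_questions.json")
    print(f"accuracy: {accuracy:.4f}")
```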