Vision-Language Models · benchmark dataset · EN

MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI.

MMT-Bench is a large, curated multimodal multitask benchmark for evaluating large vision-language models (LVLMs). It contains 31,325 multiple-choice visual questions covering 32 core meta-tasks and 162 subtasks spanning diverse multimodal scenarios (e.g., vehicle driving, embodied navigation) that require visual recognition, localization, reasoning, expert knowledge, and planning. The benchmark is intended to provide a comprehensive, task-map-style evaluation of LVLMs' multitask capabilities; the project provides dataset files on Hugging Face, code on GitHub, and a public leaderboard. Dataset release metadata indicates an MIT license.
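Because the benchmark is multiple-choice and reports results per subtask and per meta-task, scoring reduces to option-letter accuracy aggregated at those two levels. The sketch below illustrates that aggregation only; the field names ("meta_task", "subtask", "answer", "prediction") and the toy records are illustrative assumptions, not the official MMT-Bench schema or evaluation harness.

```python
# Minimal sketch: multiple-choice accuracy aggregated per subtask and meta-task.
# Field names are hypothetical, not the official MMT-Bench schema.
from collections import defaultdict

def score(records):
    """records: iterable of dicts with 'meta_task', 'subtask', 'answer', 'prediction'."""
    per_subtask = defaultdict(lambda: [0, 0])   # subtask  -> [correct, total]
    per_metatask = defaultdict(lambda: [0, 0])  # meta-task -> [correct, total]
    for r in records:
        hit = r["prediction"].strip().upper() == r["answer"].strip().upper()
        for table, key in ((per_subtask, r["subtask"]), (per_metatask, r["meta_task"])):
            table[key][0] += int(hit)
            table[key][1] += 1
    acc = lambda c, t: c / t if t else 0.0
    return (
        {k: acc(c, t) for k, (c, t) in per_subtask.items()},
        {k: acc(c, t) for k, (c, t) in per_metatask.items()},
    )

# Example: two toy items from a hypothetical "vehicle_driving" subtask.
subtask_acc, metatask_acc = score([
    {"meta_task": "visual_recognition", "subtask": "vehicle_driving",
     "answer": "B", "prediction": "B"},
    {"meta_task": "visual_recognition", "subtask": "vehicle_driving",
     "answer": "C", "prediction": "A"},
])
print(subtask_acc, metatask_acc)  # {'vehicle_driving': 0.5} {'visual_recognition': 0.5}
```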

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit and seed (a minimal sketch follows this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
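As referenced in item 02 above, a reproduction script mainly needs to pin the evaluation code to one commit, fix the random seed, and record the environment it ran in. The sketch below shows one way to capture those three things; the repository URL, commit hash, and file name are placeholders, not part of any official Codesota or MMT-Bench harness.

```python
# Minimal sketch of a reproduction record: frozen commit, fixed seed, declared
# environment. REPO and COMMIT are placeholders for your own evaluation code.
import json, platform, random, subprocess, sys

REPO = "https://github.com/your-org/your-lvlm-eval"  # placeholder URL
COMMIT = "0123abcd"                                   # placeholder frozen commit
SEED = 1234

def main():
    # Pin the evaluation code to a single commit so the run is reproducible.
    subprocess.run(["git", "clone", REPO, "eval"], check=True)
    subprocess.run(["git", "-C", "eval", "checkout", COMMIT], check=True)

    # Fix the seed for any stochastic components (sampling, shuffling).
    random.seed(SEED)

    # Declare the evaluation environment alongside the results.
    manifest = {
        "python": sys.version,
        "platform": platform.platform(),
        "dependencies": subprocess.run(
            [sys.executable, "-m", "pip", "freeze"],
            capture_output=True, text=True, check=True,
        ).stdout.splitlines(),
        "seed": SEED,
        "commit": COMMIT,
    }
    with open("environment.json", "w") as f:
        json.dump(manifest, f, indent=2)

if __name__ == "__main__":
    main()
```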