Codesota · General · Computer Use Agents · MMB-GUI (MMBench-GUI)
Computer Use Agents · benchmark dataset · EN

MMBench-GUI: Hierarchical Multi-Platform Evaluation Framework for GUI Agents.

MMBench-GUI (MMB-GUI) is a hierarchical, multi-platform benchmark for evaluating GUI automation / computer-use agents across Windows, macOS, Linux, iOS, Android, and Web. The benchmark is organized into four progressive levels: (L1) GUI Content Understanding, (L2) Element Grounding, (L3) Task Automation, and (L4) Task Collaboration, covering core capabilities from visual understanding to multi-step, cross-application task completion. It provides platform-specific splits (desktop, mobile, web) along with annotations for grounding (e.g., element bounding boxes and types), tasks, and instructions. The benchmark also proposes an efficiency-aware metric, Efficiency-Quality Area (EQA), which measures both task success and action efficiency. The L2 configuration (MMBench-GUI / MMB-GUI Element Grounding) is explicitly intended for testing cross-platform visual grounding across the mobile, web, and desktop splits. Source and metadata are available on Hugging Face (license: Apache-2.0), and the paper is on arXiv (arXiv:2507.19478).
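Grounding benchmarks of this kind typically score a prediction as correct when the predicted click point lands inside the ground-truth element box, so an L2 evaluation loop can be very small. The sketch below illustrates the shape of such a loop; the repo id, config/split names, and field names are assumptions for illustration, not the official loader interface — check the dataset card for the real ones.

```python
from datasets import load_dataset

def point_in_bbox(x, y, bbox):
    """bbox assumed to be (left, top, right, bottom) in screenshot pixels."""
    left, top, right, bottom = bbox
    return left <= x <= right and top <= y <= bottom

def predict_click(image, instruction):
    """Placeholder for your agent's grounding model; it just returns the
    screenshot centre so the loop runs end to end."""
    width, height = image.size
    return width / 2, height / 2

# Repo id, config name, split, and field names are guesses for illustration.
ds = load_dataset("OpenGVLab/MMBench-GUI", name="element_grounding", split="web")

hits = 0
for sample in ds:
    x, y = predict_click(sample["image"], sample["instruction"])
    hits += point_in_bbox(x, y, sample["bbox"])
print(f"L2 grounding accuracy: {hits / len(ds):.3f}")
```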

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
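
As a concrete example of item 02, a reproduction script can be as small as the sketch below: it freezes a seed, records the exact commit and environment, and emits one value per declared metric as JSON. Everything in it (the file name, metric key, and placeholder evaluation step) is hypothetical, not a required interface for this site.

```python
# reproduce.py -- a minimal reproduction-script sketch; the file name,
# metric key, and placeholder evaluation step are hypothetical.
import json
import platform
import random
import subprocess
import sys

SEED = 1234  # frozen seed (item 02)

def main():
    random.seed(SEED)  # also seed numpy/torch here if your agent uses them
    # Record the exact code version and environment (items 02 and 03).
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    env = {
        "python": sys.version,
        "platform": platform.platform(),
        "commit": commit,
        "seed": SEED,
    }
    # Replace this placeholder with your actual evaluation call and report
    # one value per metric declared by the dataset (item 04).
    metrics = {"l2_grounding_accuracy": None}
    json.dump({"environment": env, "metrics": metrics}, sys.stdout, indent=2)

if __name__ == "__main__":
    main()
```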