Codesota · General · Computer Use Agents · SSv2 (ScreenSpot-v2)
Computer Use Agents · benchmark dataset · EN

ScreenSpot (ScreenSpot-v2).

ScreenSpot is a cross-platform screenshot grounding benchmark introduced alongside the SeeClick visual GUI agent (Cheng et al., 2024). It contains screenshots from mobile, web, and desktop environments with grounding annotations that map natural-language instructions (referring expressions) to on-screen UI elements as bounding boxes. The dataset targets GUI visual grounding / screenshot understanding, i.e., locating the UI element referred to by a text query. It has been released in Hugging Face-hosted variants (e.g., rootsautomation/ScreenSpot and ScreenSpot-v2 entries); a Hugging Face preview of a ScreenSpot-v2 variant shows ~1,272 samples with fields image, instruction, bbox, data_source, and data_type. Key source: SeeClick (Cheng et al., 2024), which describes constructing ScreenSpot across mobile, desktop, and web to improve GUI grounding.
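ScreenSpot-style grounding is commonly scored as click accuracy: a prediction counts as correct if the predicted point falls inside the ground-truth bounding box. A minimal sketch of that metric, assuming bboxes in (x1, y1, x2, y2) pixel coordinates (the stored bbox format varies between hosted variants, so check the split you load):

```python
def point_in_bbox(point, bbox):
    """Return True if (x, y) lies inside bbox = (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = bbox
    return x1 <= x <= x2 and y1 <= y <= y2


def grounding_accuracy(predicted_points, gt_bboxes):
    """Fraction of predicted click points that land inside the target element."""
    hits = sum(point_in_bbox(p, b) for p, b in zip(predicted_points, gt_bboxes))
    return hits / len(gt_bboxes)
```

For example, `grounding_accuracy([(5, 5), (100, 100)], [(0, 0, 10, 10), (0, 0, 10, 10)])` returns 0.5: the first click lands inside its box, the second does not.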

§ 01 · Leaderboard

Best published scores.

No results indexed yet — be the first to submit a score.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed
  • 03 · A declared evaluation environment (Python version, deps)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
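The checklist above boils down to: fix the randomness, record the environment, and emit one row per metric. A hypothetical skeleton of such a reproduction script (the function names and output shape are illustrative, not Codesota's actual harness):

```python
import json
import platform
import random


def reproducible_run(seed: int = 0) -> dict:
    """Seed the RNG, record the environment, and return one metric row."""
    random.seed(seed)  # frozen seed, so reruns give identical results
    # ... the real evaluation would run here; a placeholder score stands in
    score = round(random.random(), 4)
    return {
        "metric": "grounding_accuracy",
        "score": score,
        "env": {
            "python": platform.python_version(),
            "platform": platform.platform(),
            "seed": seed,
        },
    }


if __name__ == "__main__":
    print(json.dumps(reproducible_run(seed=42), indent=2))
```

Because the seed is fixed, two runs of the script produce the same row, which is what makes a submitted score checkable.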