ScreenSpot is a cross-platform screenshot grounding benchmark introduced alongside the SeeClick visual GUI agent (Cheng et al., 2024). It contains screenshots from mobile, web, and desktop environments with grounding annotations that map natural-language instructions (or referring expressions) to on-screen UI elements (bounding boxes). The dataset targets GUI visual grounding / screenshot understanding, i.e., locating the UI element referred to by a text query, and has been released in Hugging Face-hosted variants (e.g., rootsautomation/ScreenSpot and ScreenSpot-v2). A Hugging Face preview of a ScreenSpot-v2 variant shows ~1,272 samples with fields such as image, instruction, bbox, data_source, and data_type. Key source: SeeClick (Cheng et al., 2024), which describes constructing ScreenSpot across mobile, desktop, and web to improve GUI grounding.
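ScreenSpot-style evaluation typically scores a prediction as correct when the predicted click point falls inside the ground-truth bounding box. A minimal sketch of that metric, assuming bboxes in `(x_min, y_min, x_max, y_max)` pixel format (coordinate conventions vary across ScreenSpot releases, so check the variant you load):

```python
def point_in_bbox(point, bbox):
    """Return True if point (x, y) lies inside bbox = (x_min, y_min, x_max, y_max)."""
    x, y = point
    x_min, y_min, x_max, y_max = bbox
    return x_min <= x <= x_max and y_min <= y <= y_max

def grounding_accuracy(predictions, samples):
    """Fraction of predicted click points landing inside the ground-truth bbox."""
    hits = sum(point_in_bbox(p, s["bbox"]) for p, s in zip(predictions, samples))
    return hits / len(samples)

# Toy records mimicking the instruction/bbox fields; not real dataset entries.
samples = [
    {"instruction": "open settings", "bbox": (10, 10, 50, 30)},
    {"instruction": "close dialog", "bbox": (200, 0, 240, 20)},
]
preds = [(30, 20), (100, 100)]  # first lands in-box, second misses
print(grounding_accuracy(preds, samples))  # → 0.5
```

Reproduction scripts usually report this accuracy per data_source (mobile/desktop/web) and per data_type (text vs. icon/widget) as well as overall.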
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.