Computer Use Agents · benchmark dataset · EN

UI-Vision: A Desktop-centric GUI Benchmark for Visual Perception and Interaction.

UI-Vision is a license-permissive benchmark for evaluating desktop GUI perception and interaction. It contains dense, high-quality annotations of human demonstrations across a wide range of real-world desktop applications (the paper reports 83), including bounding boxes with UI element labels, action trajectories (clicks, drag-and-drop, and keyboard input), and layout information. The benchmark defines three evaluation tasks, Element Grounding, Layout Grounding, and Action Prediction, with metrics that measure agent performance at fine to coarse granularity in desktop environments. The dataset is hosted on Hugging Face (ServiceNow/ui-vision) under an MIT license; the HF preview shows a train split (≈1.46k rows), and the repository metadata classifies it as image-text-to-text with image modality.
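
Since the dataset is distributed through Hugging Face, a minimal loading sketch may help. It assumes the `datasets` library; a config name may additionally be required by `load_dataset`, and the point-in-box test below is a common element-grounding check, not the benchmark's documented metric.

# A minimal sketch, assuming the Hugging Face `datasets` library.
# The repository id comes from this page; a config name may be required,
# and the inspected fields are assumptions, not a documented schema.
from datasets import load_dataset

ds = load_dataset("ServiceNow/ui-vision", split="train")  # HF preview reports ~1.46k rows
print(ds[0])  # inspect one annotated example

# A common element-grounding check (an assumption, not necessarily the
# benchmark's official metric): does a predicted click point land inside
# the ground-truth bounding box (x_min, y_min, x_max, y_max)?
def point_in_bbox(x: float, y: float, bbox: tuple) -> bool:
    x_min, y_min, x_max, y_max = bbox
    return x_min <= x <= x_max and y_min <= y <= y_max

print(point_in_bbox(0.42, 0.17, (0.40, 0.10, 0.55, 0.22)))  # True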

Paper · Submit a result
§ 01 · Leaderboard

Best published scores.

No benchmark results indexed yet. Be the first to submit a score.
§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it takes the top spot, annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed (see the sketch after this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
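
A minimal sketch of what such a reproduction script might record, assuming a Python evaluation; the commit placeholder and metric names are illustrative, not values defined by this page.

# A minimal sketch of a submission-ready reproduction script.
# The commit placeholder and metric names are illustrative assumptions.
import json
import random
import subprocess
import sys

COMMIT = "<frozen-commit-sha>"  # placeholder: pin the exact revision you evaluated
SEED = 42                       # declare and fix the random seed

random.seed(SEED)

# Declared evaluation environment: interpreter version plus frozen dependencies.
environment = {
    "python": sys.version,
    "dependencies": subprocess.run(
        [sys.executable, "-m", "pip", "freeze"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines(),
}

# One row per metric declared by the dataset (names here are hypothetical).
results = [
    {"metric": "element_grounding", "value": None},
    {"metric": "layout_grounding", "value": None},
    {"metric": "action_prediction", "value": None},
]

with open("submission.json", "w") as f:
    json.dump(
        {"commit": COMMIT, "seed": SEED, "environment": environment, "results": results},
        f,
        indent=2,
    )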