OSWorld-G is a desktop GUI grounding benchmark introduced in the paper "Scaling Computer-Use Grounding via User Interface Decomposition and Synthesis" (arXiv:2505.13227). It evaluates grounding capability for desktop applications: mapping natural-language instructions to specific on-screen elements and actions. The benchmark comprises 564 finely annotated examples spanning diverse task types, including text matching, element recognition, layout understanding, and precise manipulation. The project also releases a much larger synthetic training dataset (Jedi, ~4 million examples) along with code and models; the benchmark, data pipeline, and code are open-sourced (GitHub: xlang-ai/OSWorld-G), and a Hugging Face dataset release exists (MMInstruction/OSWorld-G).
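For context on how such a benchmark is typically scored: GUI grounding evaluations commonly count a prediction as correct when the model's predicted click point lands inside the ground-truth element's bounding box. The sketch below illustrates that rule; the data layout (points as `(x, y)`, boxes as `(x1, y1, x2, y2)`) is an illustrative assumption, not the official OSWorld-G schema or evaluation code.

```python
# Hedged sketch of a point-in-bounding-box grounding metric.
# The (x, y) / (x1, y1, x2, y2) conventions here are assumptions for
# illustration, not the OSWorld-G release format.
from typing import Iterable, Tuple

Point = Tuple[float, float]
BBox = Tuple[float, float, float, float]  # (x1, y1, x2, y2), pixel coords

def point_in_bbox(point: Point, bbox: BBox) -> bool:
    """True if the predicted click point falls inside the target box."""
    x, y = point
    x1, y1, x2, y2 = bbox
    return x1 <= x <= x2 and y1 <= y <= y2

def grounding_accuracy(preds: Iterable[Point], gts: Iterable[BBox]) -> float:
    """Fraction of predictions whose click point hits the ground-truth box."""
    pairs = list(zip(preds, gts))
    if not pairs:
        return 0.0
    hits = sum(point_in_bbox(p, b) for p, b in pairs)
    return hits / len(pairs)
```

A reproduction script for submission would wrap model inference around a metric like this, iterating over the benchmark's annotated examples.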
No results indexed yet — be the first to submit a score.
Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it sets a new best, annotate the step on the progress chart with your name.