Codesota · Agentic AI · Web & Desktop Agents · OSWorld
Web & Desktop Agents · benchmark dataset · 2024

OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments.

369 real computer tasks across Windows, macOS, and Ubuntu requiring GUI interaction. Tests agents operating full desktop apps like spreadsheets, image editors, and terminals. Much harder than web-only benchmarks.

§ 01 · Leaderboard

Best published scores.

13 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: success-rate · higher is better · 13 rows
# · Model · Org · Submitted · Paper / code · success-rate
01 · CoAct-1 · Salesforce · Apr 2026 · arxiv · 60.76
02 · UI-TARS-2 · ByteDance · Apr 2026 · arxiv · 47.50
03 · GTA1 (7B) · Salesforce · Apr 2026 · arxiv · 45.20
04 · UI-TARS-1.5 · ByteDance · Apr 2026 · arxiv · 42.50
05 · Agent S2 (Gemini 2.5) · Simular AI · Apr 2026 · arxiv · 41.40
06 · OpenAI CUA (o1) · OpenAI · Apr 2026 · openai-blog · 38.10
07 · Agent S2 (Claude 3.7) · Simular AI · Apr 2026 · arxiv · 34.50
08 · Claude 3.7 Sonnet (API) · Anthropic · Apr 2026 · arxiv · 28
09 · UI-TARS-72B · ByteDance · Apr 2026 · arxiv · 24.60
10 · Claude Computer Use · Anthropic · Apr 2026 · anthropic-blog · 22
11 · Claude Computer Use · Anthropic · Apr 2024 · OSWorld: Benchmarking Multimodal Agents for Open-Ended T… · 14.90
12 · UFO (GPT-4V, OSS) · Microsoft · Apr 2024 · OSWorld: Benchmarking Multimodal Agents for Open-Ended T… · 9.40
13 · GPT-4 Turbo (2024) · OpenAI · Apr 2024 · OSWorld: Benchmarking Multimodal Agents for Open-Ended T… · 6.50
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
§ 03 · Progress

2 steps
of state of the art.

Each row below marks a model that broke the previous record on success-rate. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.

SOTA line · success-rate
  1. Apr 11, 2024 · Claude Computer Use · Anthropic · 14.90
  2. Apr 9, 2026 · CoAct-1 · Salesforce · 60.76
Fig 3 · SOTA-setting models only. 2 entries span Apr 2024 to Apr 2026.
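The record-breaking filter described above can be sketched as a single pass over the leaderboard in submission order, keeping only rows that beat the best score seen so far. This is a hypothetical reconstruction (Codesota's actual pipeline is not described here); the `sota_line` helper and the toy row ordering are illustrative.

```python
# Hypothetical sketch: derive the SOTA line from leaderboard rows.
def sota_line(rows):
    """Keep only rows that beat the best score seen so far.

    rows: (date, model, org, success_rate) tuples in submission order.
    """
    best = float("-inf")
    steps = []
    for date, model, org, score in rows:
        if score > best:  # a new record on success-rate (higher is better)
            best = score
            steps.append((date, model, org, score))
    return steps

# Toy ordering consistent with the table above: the Apr 2024 entries
# arrive first, then CoAct-1, then later non-record submissions.
rows = [
    ("Apr 2024", "Claude Computer Use", "Anthropic", 14.90),
    ("Apr 2024", "UFO (GPT-4V)", "Microsoft", 9.40),
    ("Apr 2024", "GPT-4 Turbo (2024)", "OpenAI", 6.50),
    ("Apr 2026", "CoAct-1", "Salesforce", 60.76),
    ("Apr 2026", "UI-TARS-2", "ByteDance", 47.50),
]
print(sota_line(rows))
# → only Claude Computer Use (14.90) and CoAct-1 (60.76) remain: 2 steps
```

Intermediate submissions that never held the record (UFO, UI-TARS-2, etc.) stay in the full leaderboard but drop out of the SOTA line.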
§ 04 · Literature

1 paper
tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

§ 06 · Contribute

Have a score that beats
this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and, if it sets a new record, annotate the step on the progress chart with your name.

What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with frozen commit + seed
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
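A minimal reproduction-script skeleton covering items 02–05 might look like the following. Everything here is illustrative, not Codesota's required format: the `SEED`, `COMMIT`, and `CONTACT` values, the `declare_environment` helper, and the `run_benchmark` stub are all placeholders you would replace with your actual evaluation harness.

```python
# Illustrative submission skeleton only; the submission guide defines
# the real format. All names and values below are placeholders.
import json
import platform
import random
import sys

SEED = 1234                  # frozen seed (item 02)
COMMIT = "0000000"           # frozen commit of the evaluated agent (item 02)
CONTACT = "you@example.com"  # follow-up contact (item 05)

def declare_environment():
    """Item 03: record the evaluation environment next to the score."""
    return {
        "python": platform.python_version(),
        "platform": platform.platform(),
        "seed": SEED,
        "commit": COMMIT,
    }

def run_benchmark(seed):
    """Stand-in for the actual evaluation loop. Item 04: return one
    entry per metric this dataset declares (here, success-rate)."""
    random.seed(seed)
    return {"success-rate": 0.0}  # replace with the measured score

if __name__ == "__main__":
    report = {
        "contact": CONTACT,
        "env": declare_environment(),
        "results": run_benchmark(SEED),
    }
    json.dump(report, sys.stdout, indent=2)
```

Pinning the seed and commit up front, and emitting the environment alongside the score in one machine-readable report, makes discrepancies between your run and the re-run straightforward to diagnose.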