Codesota · Agentic AI · Web & Desktop Agents · WebArena
Web & Desktop Agents · benchmark dataset · 2023

WebArena: A Realistic Web Environment for Building Autonomous Agents.

812 long-horizon web navigation tasks across realistic web environments (e-commerce, social media, code repositories, CMS). Tests an agent's ability to complete real-world browser tasks such as making purchases, posting content, or querying databases.

Paper · Submit a result
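To make the task format concrete, here is an illustrative sketch of what a single WebArena-style task specification might look like. The field names and values below are assumptions for illustration only; consult the WebArena repository for the exact config schema.

```python
# Illustrative sketch of a WebArena-style task specification.
# Field names are assumptions, not the benchmark's verbatim schema.
example_task = {
    "task_id": 101,                        # hypothetical task ID
    "sites": ["shopping"],                 # which hosted site(s) the task uses
    "start_url": "http://localhost:7770",  # page the agent starts from
    "intent": "Find the cheapest wireless mouse and add it to the cart.",
    "eval": {
        # Success is judged programmatically, e.g. by inspecting the
        # final page state or backend records, rather than by a human.
        "eval_types": ["program_html"],
        "reference_answers": None,
    },
}
```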
§ 01 · Leaderboard

Best published scores.

6 results indexed across 1 metric. Shaded row marks current SOTA; ties broken by submission date.


Primary metric: success-rate (higher is better) · 6 rows

 # | Model                   | Org          | Submitted | Paper / code                                                          | success-rate
01 | Agent-E (GPT-4o)        | Emergence AI | Jul 2023  | WebArena: A Realistic Web Environment for Building Autonomous Agents | 73.00
02 | OpenAI Operator (CUA)   | OpenAI       | Jan 2025  | official-announcement                                                | 58.10
03 | Claude Opus 4 (API)     | Anthropic    | Apr 2025  | METR: Measuring Autonomy in AI Systems (2025 Update)                 | 55.00
04 | Agent Q (GPT-4o)        | MultiOn      | Jul 2023  | WebArena: A Realistic Web Environment for Building Autonomous Agents | 50.50
05 | Claude 3.7 Sonnet (API) | Anthropic    | Feb 2025  | Claude 3.7 Sonnet System Card                                        | 35.10
06 | GPT-4 Turbo (2024)      | OpenAI       | Jul 2023  | WebArena: A Realistic Web Environment for Building Autonomous Agents | 14.90
Fig 2 · Rows sorted by score within each metric. Shaded row marks SOTA. Dates reflect model or paper release where available, otherwise the date Codesota accessed the source.
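For reference, the success-rate above is simply the fraction of the benchmark's 812 tasks that an agent completes, as judged by each task's programmatic check. A minimal sketch of the aggregation, assuming one boolean outcome per task:

```python
def success_rate(outcomes: list[bool]) -> float:
    """Percentage of tasks whose functional check passed.

    `outcomes` is assumed to hold one boolean per benchmark task
    (True = the task's evaluator accepted the final state).
    """
    return 100.0 * sum(outcomes) / len(outcomes)

# Example: an agent that solves 593 of WebArena's 812 tasks
# would score ~73.0 on this leaderboard.
print(success_rate([True] * 593 + [False] * 219))  # -> 73.03...
```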
§ 03 · Progress

1 step in the state of the art.

Each row below marks a model that broke the previous record on success-rate. Intermediate submissions are kept in the leaderboard above; only SOTA-setting entries are re-listed here.

Higher scores win. Each subsequent entry improved upon the previous best.

SOTA line · success-rate
  1. Jul 26, 2023 · Agent-E (GPT-4o) · Emergence AI · 73.00
Fig 3 · SOTA-setting models only. 1 entry, dated Jul 2023.
§ 04 · Literature

3 papers tied to this benchmark.

Every paper below corresponds to at least one row in the leaderboard above. Click through for the arXiv preprint and, when available, the reference implementation.

§ 06 · Contribute

Have a score that beats this table?

Submit a checkpoint and a reproduction script. We will run it, publish the score, and — if it takes the top spot — annotate the step on the progress chart with your name.

Submit a result · Read submission guide
What a submission needs
  • 01 · A public checkpoint or API endpoint
  • 02 · A reproduction script with a frozen commit + seed (a minimal sketch follows this list)
  • 03 · A declared evaluation environment (Python version, dependencies)
  • 04 · One row per metric declared by this dataset
  • 05 · A contact so we can follow up on discrepancies
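To make the checklist concrete, here is a hedged sketch of what a minimal reproduction script might look like. The repository URL, commit hash, and entry-point names below are placeholders for illustration, not a real submission.

```python
#!/usr/bin/env python3
"""Hypothetical reproduction script for a WebArena submission.

All names below (repo URL, commit, entry point) are placeholders
illustrating the checklist above, not a working pipeline.
"""
import random
import subprocess

REPO = "https://github.com/example-org/my-webarena-agent"  # placeholder
COMMIT = "0123abc"   # frozen commit: pins the exact code under test
SEED = 42            # fixed seed so reruns are comparable

def main() -> None:
    # 1. Check out the frozen commit so the run is reproducible.
    subprocess.run(["git", "clone", REPO, "agent"], check=True)
    subprocess.run(["git", "-C", "agent", "checkout", COMMIT], check=True)

    # 2. Install the declared dependencies for the evaluation environment.
    subprocess.run(["pip", "install", "-r", "agent/requirements.txt"], check=True)

    # 3. Seed everything the harness controls.
    random.seed(SEED)

    # 4. Run the full task suite and emit one row per declared metric.
    #    `run_all_tasks.py` is a placeholder for the submitter's entry point.
    subprocess.run(
        ["python", "agent/run_all_tasks.py", "--seed", str(SEED), "--out", "results.csv"],
        check=True,
    )

if __name__ == "__main__":
    main()
```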
WebArena — Web & Desktop Agents benchmark · Codesota