
Web Bench
Launching today
A 10x better benchmark for AI browser agents
134 followers
Compare and benchmark different AI web browsing agents. Web Bench provides comprehensive performance metrics for AI agents navigating the web.
Skyvern
TL;DR: Web Bench is a new dataset for evaluating web browsing agents that consists of 5,750 tasks on 452 different websites, with 2,454 tasks being open sourced. It builds on the foundations of WebVoyager, which didn't represent the internet well because it only spanned 15 websites. Anthropic Sonnet 3.7 CUA is the current SOTA, with Skyvern being the best agent for WRITE-heavy tasks. The detailed results are here.
I bet you've seen a bunch of flashy demos of web browsing agents, looked at the crazy high scores on the benchmarks, and excitedly tried them out... only to realize they don't work as well as advertised.
This is because the previous benchmark (WebVoyager) only spanned 643 tasks across 15 websites. While it was a great starting point, it didn't capture the internet's adversarial nature towards browser automation, or the difficulty of tasks that mutate data on a website.
As a result, Skyvern and Halluminate teamed up to create a new benchmark to better quantify these failures. Our goal was to create a new, consistent measurement system for AI web agents by expanding the foundations created by WebVoyager:
Expanding the number of websites from 15 → 452, and tasks from 643 → 5,750, to test agent performance on a wider variety of websites
Introducing the concept of READ vs WRITE tasks
READ tasks involve navigating websites and fetching data
WRITE tasks involve entering data, downloading files, logging in, solving 2FA, etc., and were not well represented in the WebVoyager dataset
Measuring the impact of browser infrastructure (e.g. accessing websites, solving captchas, not crashing)
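To make the READ vs WRITE distinction concrete, here's a minimal sketch of how tasks in the two categories might be represented. The field names, example tasks, and helper below are purely illustrative assumptions, not the actual Web Bench schema:

```python
# Hypothetical sketch of a Web Bench-style task record.
# Field names ("website", "instruction", "kind") are illustrative only.
from dataclasses import dataclass

@dataclass
class WebTask:
    website: str
    instruction: str
    kind: str  # "READ" or "WRITE"

tasks = [
    # READ: navigate and fetch data, no state changes on the site
    WebTask("example-store.com", "Find the price of the cheapest laptop", "READ"),
    # WRITE: mutate state on the site (forms, logins, downloads, 2FA, checkout)
    WebTask("example-store.com", "Add the cheapest laptop to the cart and check out", "WRITE"),
]

def is_write(task: WebTask) -> bool:
    """WRITE tasks change state on the website; READ tasks only observe it."""
    return task.kind == "WRITE"

write_tasks = [t for t in tasks if is_write(t)]
print(len(write_tasks))  # 1
```

Scoring READ and WRITE tasks separately matters because an agent can look strong on pure retrieval while still failing the multi-step, state-changing flows that most real automation requires.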
We ran the benchmark and open sourced 2,454 of the tasks to help the industry move towards a new standard, and the results surprised us:
The best model is Anthropic's CUA model
All models did very poorly on WRITE-heavy tasks
Browser Infrastructure played a bigger role in the agents' ability to take actions than previously expected
If you're interested, read the full report here
Have any cool use-cases for browser agents? Reply and let me know below 👇
PrettyPolly
Awesome! Literally exactly what I needed. Have been working on an agentic product and, until now, have just been testing it using whatever wild task I dream up on any given day.
Having something as comprehensive as this means I can be objective about the quality/usefulness of what I’m building.
GL with the launch
Web Bench
@cwbuilds1 Thanks Chris!
Reworkd
Congrats on the launch folks! Huge eval hole with web agents so this work is really appreciated
Skyvern
@asim_shrestha1 <3