REAL: Benchmarking Autonomous Agents on Deterministic Simulations of Real Websites

Published: April 15, 2025 | arXiv ID: 2504.11543v2

By: Divyansh Garg, Shaun VanWeelden, Diego Caples, and more

Potential Business Impact:

Tests AI agents on realistic website tasks within deterministic simulations.

Business Areas:
Simulation Software

We introduce REAL, a benchmark and framework for multi-turn agent evaluations on deterministic simulations of real-world websites. REAL comprises high-fidelity, deterministic replicas of 11 widely-used websites across domains such as e-commerce, travel, communication, and professional networking. We also release a benchmark consisting of 112 practical tasks that mirror everyday complex user interactions requiring both accurate information retrieval and state-changing actions. All interactions occur within this fully controlled setting, eliminating safety risks and enabling robust, reproducible evaluation of agent capability and reliability. Our novel evaluation framework combines programmatic checks of website state for action-based tasks with rubric-guided LLM-based judgments for information retrieval. The framework supports both open-source and proprietary agent systems through a flexible evaluation harness that accommodates black-box commands within browser environments, allowing research labs to test agentic systems without modification. Our empirical results show that frontier language models achieve at most a 41% success rate on REAL, highlighting critical gaps in autonomous web navigation and task completion capabilities. Our framework supports easy integration of new tasks, reproducible evaluation, and scalable post-training data generation, marking a significant step forward in evaluating and advancing agent capabilities.
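The dual evaluation scheme described in the abstract — programmatic checks of website state for action-based tasks, and rubric-guided judgments for information-retrieval tasks — can be sketched roughly as follows. All names are hypothetical, and the keyword-matching "judge" is a stand-in for the LLM-based judgment the paper describes; REAL's actual harness is not shown here:

```python
# Hypothetical sketch of a dual-mode task evaluator, loosely inspired by the
# abstract's description. Names and logic are illustrative only.

from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class Task:
    kind: str                                         # "action" or "retrieval"
    # Action tasks: a predicate over the simulated site's final state.
    state_check: Optional[Callable[[dict], bool]] = None
    # Retrieval tasks: rubric criteria a judge scores the answer against.
    rubric: list = field(default_factory=list)

def judge_with_rubric(answer: str, rubric: list) -> bool:
    """Stand-in for an LLM judge: pass if every rubric item appears in the
    answer. A real judge would score semantically, not by substring match."""
    return all(item.lower() in answer.lower() for item in rubric)

def evaluate(task: Task, final_state: dict, answer: str = "") -> bool:
    if task.kind == "action":
        return task.state_check(final_state)
    return judge_with_rubric(answer, task.rubric)

# Action task: did the agent actually place the order on the simulated site?
order_task = Task(kind="action",
                  state_check=lambda s: s.get("cart_status") == "ordered")
print(evaluate(order_task, {"cart_status": "ordered"}))  # True

# Retrieval task: did the agent report the requested flight details?
price_task = Task(kind="retrieval", rubric=["$420", "nonstop"])
print(evaluate(price_task, {}, answer="The nonstop flight costs $420."))  # True
```

Because the simulated sites are deterministic, an action-task check like the one above can be rerun against the exact same final state, which is what makes the evaluation reproducible.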

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
Artificial Intelligence