Beyond Simulation: Benchmarking World Models for Planning and Causality in Autonomous Driving
By: Hunter Schofield, Mohammed Elmahgiubi, Kasra Rezaee, and more
Potential Business Impact:
Tests whether learned AI traffic simulators are reliable enough to use for training self-driving policies.
World models have become increasingly popular as learned traffic simulators, and recent work has explored replacing traditional traffic simulators with world models for policy training. In this work, we examine whether the metrics used to evaluate world models as traffic simulators remain suitable when the world model serves as a pseudo-environment for policy training. Specifically, we analyze the metametric employed by the Waymo Open Sim-Agents Challenge (WOSAC) and compare world model predictions on standard scenarios where the agents are fully or partially controlled by the world model (partial replay). Furthermore, since we are interested in evaluating the ego action-conditioned world model, we extend the standard WOSAC evaluation domain to include agents that are causal to the ego vehicle. Our evaluations reveal a significant number of scenarios where top-ranking models perform well under no perturbation but fail when the ego agent is forced to replay its original trajectory. To address these cases, we propose new metrics that highlight the sensitivity of world models to uncontrollable objects and that evaluate world models as pseudo-environments for policy training, and we analyze several state-of-the-art world models under these new metrics.
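The core comparison in this evaluation is between rollouts where the world model controls every agent and rollouts where the ego is forced to replay its logged trajectory. The sketch below illustrates that comparison under stated assumptions: the `world_model.step` interface, the `scenario` dictionary with logged trajectories, and the displacement-based score are all hypothetical stand-ins (the actual WOSAC metametric aggregates distributional realism scores, not raw displacement), not the paper's implementation.

```python
import numpy as np

def rollout(world_model, scenario, replay_ego=False):
    """Roll a scenario forward with the world model controlling the agents.

    If replay_ego is True, the ego agent is overwritten with its logged
    trajectory at every step (partial replay); otherwise the world model
    controls the ego as well.
    """
    states = [scenario["initial_state"]]
    for t in range(scenario["horizon"]):
        state = world_model.step(states[-1])  # hypothetical one-step predictor
        if replay_ego:
            # Force the ego back onto the logged (ground-truth) trajectory.
            state["ego"] = scenario["ego_log"][t]
        states.append(state)
    return states

def displacement_metric(states, scenario):
    """Mean displacement of non-ego agents from their logged positions.

    Simplified stand-in for the WOSAC metametric.
    """
    errors = []
    for t, state in enumerate(states[1:]):
        pred = np.asarray(state["agents"])          # (num_agents, 2) positions
        logged = np.asarray(scenario["agents_log"][t])
        errors.append(np.linalg.norm(pred - logged, axis=-1).mean())
    return float(np.mean(errors))

def replay_sensitivity(world_model, scenario):
    """Score gap between free rollout and ego-replay rollout."""
    free = displacement_metric(rollout(world_model, scenario), scenario)
    replay = displacement_metric(rollout(world_model, scenario, replay_ego=True), scenario)
    return replay - free
```

A large positive gap from `replay_sensitivity` flags scenarios where a model that scores well under full control degrades once the ego is replayed, which is the kind of failure the paper's proposed metrics are designed to surface.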
Similar Papers
A Survey of World Models for Autonomous Driving
Robotics
Helps self-driving cars predict and plan driving.
The Safety Challenge of World Models for Embodied AI Agents: A Review
Artificial Intelligence
Makes robots predict and act safely.
World-in-World: World Models in a Closed-Loop World
CV and Pattern Recognition
Helps robots learn to do tasks better.