Toward Efficient and Robust Behavior Models for Multi-Agent Driving Simulation
By: Fabian Konstantinidis, Moritz Sackmann, Ulrich Hofmann, and more
Potential Business Impact:
Makes self-driving cars learn faster and better.
Scalable multi-agent driving simulation requires behavior models that are both realistic and computationally efficient. We address this by optimizing the behavior model that controls individual traffic participants. To improve efficiency, we adopt an instance-centric scene representation, where each traffic participant and map element is modeled in its own local coordinate frame. This design enables efficient, viewpoint-invariant scene encoding and allows static map tokens to be reused across simulation steps. To model interactions, we employ a query-centric symmetric context encoder with relative positional encodings between local frames. We use Adversarial Inverse Reinforcement Learning to learn the behavior model and propose an adaptive reward transformation that automatically balances robustness and realism during training. Experiments demonstrate that our approach scales efficiently with the number of tokens, significantly reducing training and inference times, while outperforming several agent-centric baselines in terms of positional accuracy and robustness.
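The core of the instance-centric representation is expressing every other token's pose relative to each token's own local frame, which makes the encoding viewpoint-invariant and lets static map tokens be reused across steps. The sketch below illustrates that relative-pose computation under the assumption that each token carries a global pose (x, y, heading); the function names and feature layout are illustrative, not the paper's exact implementation.

```python
# Sketch: relative positional encodings between instance-centric local frames.
# Assumption: each token (agent or map element) has a global pose [x, y, heading];
# the paper's actual token features and encoder inputs may differ.
import numpy as np

def relative_pose(pose_i: np.ndarray, pose_j: np.ndarray) -> np.ndarray:
    """Express token j's pose in token i's local frame.

    pose = [x, y, heading]; returns [dx_local, dy_local, dheading].
    """
    dx, dy = pose_j[0] - pose_i[0], pose_j[1] - pose_i[1]
    c, s = np.cos(-pose_i[2]), np.sin(-pose_i[2])
    # Rotate the global offset into frame i, making the encoding viewpoint-invariant.
    dx_local = c * dx - s * dy
    dy_local = s * dx + c * dy
    # Wrap the heading difference to (-pi, pi].
    dheading = np.arctan2(np.sin(pose_j[2] - pose_i[2]),
                          np.cos(pose_j[2] - pose_i[2]))
    return np.array([dx_local, dy_local, dheading])

def pairwise_relative_encodings(poses: np.ndarray) -> np.ndarray:
    """All-pairs relative encodings for N tokens, shape (N, N, 3).

    Such encodings could serve as the relative positional inputs of a
    query-centric context encoder; static map tokens keep their local
    features fixed across simulation steps.
    """
    n = poses.shape[0]
    rel = np.zeros((n, n, 3))
    for i in range(n):
        for j in range(n):
            rel[i, j] = relative_pose(poses[i], poses[j])
    return rel

# Example: two agents and one map token with global poses.
poses = np.array([[0.0, 0.0, 0.0],
                  [5.0, 2.0, np.pi / 2],
                  [10.0, -1.0, 0.1]])
print(pairwise_relative_encodings(poses).shape)  # (3, 3, 3)
```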
Similar Papers
Post-Training and Test-Time Scaling of Generative Agent Behavior Models for Interactive Autonomous Driving
Robotics
Makes self-driving cars safer and better at reacting.
HAD-Gen: Human-like and Diverse Driving Behavior Modeling for Controllable Scenario Generation
Robotics
Makes self-driving cars act more like real people.