Toward Efficient and Robust Behavior Models for Multi-Agent Driving Simulation
By: Fabian Konstantinidis, Moritz Sackmann, Ulrich Hofmann, and more
Scalable multi-agent driving simulation requires behavior models that are both realistic and computationally efficient. We address this by optimizing the behavior model that controls individual traffic participants. To improve efficiency, we adopt an instance-centric scene representation, where each traffic participant and map element is modeled in its own local coordinate frame. This design enables efficient, viewpoint-invariant scene encoding and allows static map tokens to be reused across simulation steps. To model interactions, we employ a query-centric symmetric context encoder with relative positional encodings between local frames. We use Adversarial Inverse Reinforcement Learning to learn the behavior model and propose an adaptive reward transformation that automatically balances robustness and realism during training. Experiments demonstrate that our approach scales efficiently with the number of tokens, significantly reducing training and inference times, while outperforming several agent-centric baselines in terms of positional accuracy and robustness.
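The abstract's key efficiency claim rests on encoding each token in its own local frame and expressing pairwise geometry as relative positional encodings, so the features stay invariant to the global viewpoint and static map tokens need not be re-encoded every step. The sketch below illustrates that idea only; it is not the authors' implementation, and the function name, feature layout, and (sin, cos) heading encoding are assumptions for illustration.

```python
# Minimal sketch: relative positional encoding between two instance-centric
# local frames, of the kind used as edge features in query-centric attention.
import numpy as np

def relative_frame_encoding(pos_i, yaw_i, pos_j, yaw_j):
    """Encode the pose of token j relative to the local frame of token i.

    pos_*: (x, y) origin of the token's local frame in world coordinates.
    yaw_*: heading of the token's local frame in world coordinates (radians).
    Returns a viewpoint-invariant feature: the offset to token j expressed
    in frame i, plus the relative heading as (sin, cos).
    """
    dx, dy = pos_j[0] - pos_i[0], pos_j[1] - pos_i[1]
    # Rotate the world-frame offset into frame i (apply R(yaw_i)^T).
    c, s = np.cos(yaw_i), np.sin(yaw_i)
    local_dx = c * dx + s * dy
    local_dy = -s * dx + c * dy
    dyaw = yaw_j - yaw_i
    return np.array([local_dx, local_dy, np.sin(dyaw), np.cos(dyaw)])

# The encoding is unchanged if both poses are shifted and rotated by the
# same global transform, which is what makes the scene encoding reusable.
print(relative_frame_encoding((0.0, 0.0), 0.0, (5.0, 2.0), np.pi / 4))
```

Because map elements do not move, the encodings involving only static map tokens can be computed once and cached across simulation steps, which is where the reported reduction in training and inference time comes from.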