Red-Team Multi-Agent Reinforcement Learning for Emergency Braking Scenario
By: Yinsong Chen, Kaifeng Wang, Xiaoqiang Meng, and more
Potential Business Impact:
Finds hidden dangers for self-driving cars.
Current research on decision-making in safety-critical scenarios often relies on inefficient data-driven scenario generation or narrowly tailored modeling approaches, which fail to capture corner cases that arise in real-world contexts. To address this issue, we propose a Red-Team Multi-Agent Reinforcement Learning framework in which background vehicles with interference capabilities are treated as red-team agents. Through active interference and exploration, red-team vehicles can uncover corner cases outside the data distribution. The framework uses a Constraint Graph Representation Markov Decision Process, ensuring that red-team vehicles comply with safety rules while continuously disrupting autonomous vehicles (AVs). A policy threat zone model quantifies the threat posed by red-team vehicles to AVs, inducing more extreme actions that raise the danger level of the scenario. Experimental results show that the proposed framework significantly degrades the decision-making safety of AVs and generates diverse corner cases. The method also offers a novel direction for research in safety-critical scenarios.
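The paper itself does not include code, but the core idea, rewarding red-team vehicles for threatening the AV while penalizing their own safety violations, can be illustrated with a short sketch. Everything below is an assumption for illustration: the function names (threat_zone_score, red_team_reward), the straight-line motion prediction, and the specific penalty form are not taken from the paper, which uses a Constraint Graph Representation MDP rather than a simple reward penalty.

```python
import numpy as np

def threat_zone_score(av_pos, av_vel, rt_pos, rt_vel, horizon=3.0, dt=0.1):
    """Hypothetical threat-zone score: roll both vehicles forward under a
    constant-velocity model and return the inverse of the minimum predicted
    gap, so smaller future gaps mean a higher threat to the AV."""
    min_gap = np.inf
    for t in np.arange(0.0, horizon, dt):
        gap = np.linalg.norm((av_pos + av_vel * t) - (rt_pos + rt_vel * t))
        min_gap = min(min_gap, gap)
    return 1.0 / max(min_gap, 1e-3)

def red_team_reward(av_pos, av_vel, rt_pos, rt_vel,
                    safety_gap=2.0, violation_penalty=10.0):
    """Hypothetical red-team reward: grows with the threat posed to the AV,
    but a constraint penalty fires if the red-team vehicle itself breaks the
    minimum safety gap (a crude stand-in for the paper's safety constraints)."""
    threat = threat_zone_score(av_pos, av_vel, rt_pos, rt_vel)
    current_gap = np.linalg.norm(av_pos - rt_pos)
    penalty = violation_penalty if current_gap < safety_gap else 0.0
    return threat - penalty

# Example: a red-team vehicle cutting toward an AV travelling at 10 m/s.
r = red_team_reward(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
                    np.array([5.0, 1.0]), np.array([9.0, -0.5]))
```

The trade-off this sketch captures is the one the abstract describes: the red-team agent is pushed toward ever more aggressive maneuvers around the AV, yet must stay within its own safety rules, so the corner cases it surfaces remain plausible traffic situations rather than outright collisions by the adversary.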
Similar Papers
Dynamic Residual Safe Reinforcement Learning for Multi-Agent Safety-Critical Scenarios Decision-Making
Robotics
Helps self-driving cars avoid crashes safely.
RedDebate: Safer Responses through Multi-Agent Red Teaming Debates
Computation and Language
AI learns to be safer by arguing with itself.
Predictive Red Teaming: Breaking Policies Without Breaking Robots
Robotics
Finds robot mistakes before they happen.