SafePlanner: Testing Safety of the Automated Driving System Plan Model
By: Dohyun Kim, Sanggu Han, Sangmin Woo, and more
In this work, we present SafePlanner, a systematic testing framework for identifying safety-critical flaws in the Plan model of Automated Driving Systems (ADS). SafePlanner targets two core challenges: generating structurally meaningful test scenarios and detecting hazardous planning behaviors. To maximize coverage, SafePlanner performs a structural analysis of the Plan model implementation, specifically its scene-transition logic and hierarchical control flow, and uses this insight to extract feasible scene transitions directly from code. It then composes test scenarios by combining these transitions with non-player character (NPC) vehicle behaviors, and applies guided fuzzing to explore the behavioral space of the Plan model under these scenarios. We evaluate SafePlanner on Baidu Apollo, a production-grade Level 4 ADS. SafePlanner generates 20,635 test cases and detects 520 hazardous behaviors, which our manual analysis groups into 15 root causes. For four of these, we applied patches based on our analysis; the issues disappeared, and no apparent side effects were observed. SafePlanner achieves 83.63% function coverage and 63.22% decision coverage on the Plan model, outperforming baselines in both bug discovery and efficiency.
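To make the described workflow concrete, the following is a minimal Python sketch of how code-extracted scene transitions could be combined with NPC behaviors and explored by coverage-guided fuzzing. All names here (compose_scenario, run_plan_model, coverage_of, is_hazardous, the transition and behavior lists) are illustrative assumptions for exposition, not SafePlanner's actual implementation or Apollo's API.

    import random

    # Hypothetical scene transitions extracted from the Plan model's
    # scene-transition logic, and NPC behaviors to pair them with.
    SCENE_TRANSITIONS = [
        ("LANE_FOLLOW", "LANE_CHANGE"),
        ("LANE_FOLLOW", "STOP_SIGN"),
        ("LANE_CHANGE", "EMERGENCY_STOP"),
    ]
    NPC_BEHAVIORS = ["cut_in", "hard_brake", "slow_lead", "run_red_light"]


    def compose_scenario(transition, npc_behavior, params):
        """Combine a feasible scene transition with an NPC behavior into a test case."""
        return {"transition": transition, "npc": npc_behavior, "params": params}


    def mutate(params):
        """Randomly perturb continuous scenario parameters (speeds, gaps, timings)."""
        return {k: v * random.uniform(0.8, 1.2) for k, v in params.items()}


    def guided_fuzz(run_plan_model, coverage_of, is_hazardous, budget=1000):
        """Coverage-guided search over composed scenarios; returns hazardous cases.

        run_plan_model(case) -> execution trace of the Plan model (assumed callable)
        coverage_of(trace)   -> set of covered functions/decisions (assumed callable)
        is_hazardous(trace)  -> True if the trace shows hazardous planning behavior
        """
        seen_coverage, corpus, hazards = set(), [], []
        # Seed corpus: every feasible transition paired with every NPC behavior.
        for transition in SCENE_TRANSITIONS:
            for npc in NPC_BEHAVIORS:
                corpus.append(compose_scenario(transition, npc,
                                               {"npc_speed": 10.0, "gap": 20.0}))
        for _ in range(budget):
            seed = random.choice(corpus)
            case = compose_scenario(seed["transition"], seed["npc"],
                                    mutate(seed["params"]))
            trace = run_plan_model(case)          # execute the Plan model on the case
            if is_hazardous(trace):               # e.g. collision or unsafe stop
                hazards.append(case)
            new_cov = coverage_of(trace) - seen_coverage
            if new_cov:                           # keep inputs that add coverage
                seen_coverage |= new_cov
                corpus.append(case)
        return hazards

The key design point the sketch mirrors is that the search space is structured by transitions recovered from the code rather than by random scenario generation, and mutation only refines the continuous parameters within each composed scenario.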