Unveiling Uncertainty-Aware Autonomous Cooperative Learning Based Planning Strategy
By: Shiyao Zhang, Liwei Deng, Shuyu Zhang and more
Potential Business Impact:
Cars learn to drive safely together, even with mistakes.
In future intelligent transportation systems, autonomous cooperative planning (ACP) is a promising technique for increasing the effectiveness and safety of multi-vehicle interactions. However, existing ACP strategies cannot fully address multiple sources of uncertainty, e.g., perception, planning, and communication uncertainties. To address these, a novel deep reinforcement learning-based autonomous cooperative planning (DRLACP) framework is proposed to tackle the various uncertainties in cooperative motion planning schemes. Specifically, soft actor-critic (SAC) with gated recurrent units (GRUs) is adopted to learn deterministic optimal time-varying actions under imperfect state information caused by planning, communication, and perception uncertainties. In addition, the real-time actions of autonomous vehicles (AVs) are demonstrated on the Car Learning to Act (CARLA) simulation platform. Evaluation results show that the proposed DRLACP learns and performs cooperative planning effectively, outperforming other baseline methods under different scenarios with imperfect AV state information.
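The key mechanism the abstract describes is a recurrent hidden state that summarizes a history of noisy observations, so the policy can act even when the current measurement is unreliable. The following is a minimal illustrative sketch of that idea, not the paper's actual architecture: a hand-rolled GRU cell (in numpy) is stepped over a sequence of noisy observation vectors, and the resulting hidden state is what an SAC actor head would map to a driving action. All dimensions and names here are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (illustrative; the paper's exact layers are not specified here)."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(hidden_dim)
        # Stacked weights for the update (z), reset (r), and candidate (n) gates.
        self.W = rng.uniform(-scale, scale, (3 * hidden_dim, input_dim))
        self.U = rng.uniform(-scale, scale, (3 * hidden_dim, hidden_dim))
        self.b = np.zeros(3 * hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        hd = self.hidden_dim
        gx = self.W @ x + self.b   # input contribution to all three gates
        gh = self.U @ h            # recurrent contribution
        z = sigmoid(gx[:hd] + gh[:hd])                # update gate
        r = sigmoid(gx[hd:2 * hd] + gh[hd:2 * hd])    # reset gate
        n = np.tanh(gx[2 * hd:] + r * gh[2 * hd:])    # candidate state
        return (1 - z) * n + z * h                    # new hidden state

# Roll the cell over noisy ego-vehicle observations (hypothetical 4-dim state,
# e.g. position/velocity, corrupted by perception noise). The final hidden
# state is the belief an SAC actor head would condition on.
cell = GRUCell(input_dim=4, hidden_dim=8)
h = np.zeros(8)
rng = np.random.default_rng(42)
for t in range(10):
    obs = np.sin(0.3 * t) + 0.1 * rng.normal(size=4)  # noisy measurement
    h = cell.step(obs, h)
print(h.shape)  # (8,)
```

Because the update gate interpolates between the previous hidden state and a bounded candidate, the belief state stays in (-1, 1) and degrades gracefully when individual observations are corrupted, which is the property that makes GRUs a natural fit for the communication and perception dropouts the paper targets.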
Similar Papers
Automated Parking Trajectory Generation Using Deep Reinforcement Learning
Robotics
Teaches cars to park themselves perfectly.
UNCAP: Uncertainty-Guided Planning Using Natural Language Communication for Cooperative Autonomous Vehicles
Robotics
Cars talk with simple words to drive safer.
From Learning to Mastery: Achieving Safe and Efficient Real-World Autonomous Driving with Human-In-The-Loop Reinforcement Learning
Machine Learning (CS)
Teaches self-driving cars to learn safely from humans.