Towards Robust Deep Reinforcement Learning against Environmental State Perturbation
By: Chenxu Wang, Huaping Liu
Potential Business Impact:
Helps robots keep performing reliably even when their surroundings change unexpectedly.
Adversarial attacks and robustness in Deep Reinforcement Learning (DRL) have been widely studied under various threat models; however, few works consider environmental state perturbations, which arise naturally in embodied scenarios. To improve the robustness of DRL agents, we formulate the problem of environmental state perturbation, introduce a preliminary non-targeted attack method as a calibration adversary, and then propose a defense framework, named Boosted Adversarial Training (BAT), which first tunes the agent via supervised learning to avoid catastrophic failure and subsequently adversarially trains it with reinforcement learning. Extensive experimental results substantiate the vulnerability of mainstream agents under environmental state perturbations and the effectiveness of our proposed attack. The defense results demonstrate that while existing robust reinforcement learning algorithms may not be suitable, our BAT framework can significantly enhance the robustness of agents against environmental state perturbations across various situations.
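The two-phase structure the abstract describes (supervised tuning first, then adversarial training against a non-targeted perturbation adversary) can be sketched on a toy problem. This is a minimal illustration, not the paper's implementation: the linear policy, the squared-error surrogate loss, and the `calibration_adversary` function (a random bounded state shift standing in for the paper's non-targeted attack) are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear policy a = s @ w; an "expert" weight vector defines
# the desired action for each state. Everything below is illustrative.
dim = 4
w_expert = rng.normal(size=dim)

def loss(w, states):
    # Squared action error vs. the expert on the given (possibly perturbed) states.
    return np.mean((states @ w - states @ w_expert) ** 2)

def grad(w, states):
    err = states @ w - states @ w_expert
    return 2 * states.T @ err / len(states)

def calibration_adversary(states, eps=0.5):
    # Stand-in for a non-targeted environmental perturbation:
    # a random bounded shift applied to the observed state.
    return states + eps * rng.choice([-1.0, 1.0], size=states.shape)

w = rng.normal(size=dim)
clean = rng.normal(size=(256, dim))

# Phase 1 (cf. BAT): supervised tuning on clean states,
# so the agent does not fail catastrophically before adversarial training.
for _ in range(200):
    w -= 0.05 * grad(w, clean)

# Phase 2 (cf. BAT): train on adversarially perturbed states.
for _ in range(200):
    perturbed = calibration_adversary(clean)
    w -= 0.05 * grad(w, perturbed)

robust_loss = loss(w, calibration_adversary(clean))
```

In this toy instance the phase-2 loop is ordinary gradient descent on perturbed inputs; in the paper's setting the same slot is filled by reinforcement learning against the calibration adversary.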
Similar Papers
State-Aware Perturbation Optimization for Robust Deep Reinforcement Learning
Machine Learning (CS)
Makes robots safer by fooling them with tricky inputs.
Robust Deep Reinforcement Learning in Robotics via Adaptive Gradient-Masked Adversarial Attacks
Machine Learning (CS)
Tricks robots into making bad choices.
Realistic Adversarial Attacks for Robustness Evaluation of Trajectory Prediction Models via Future State Perturbation
Machine Learning (CS)
Makes self-driving cars safer by testing their reactions.