Towards Robust Deep Reinforcement Learning against Environmental State Perturbation

Published: June 10, 2025 | arXiv ID: 2506.08961v1

By: Chenxu Wang, Huaping Liu

Potential Business Impact:

Helps embodied agents such as robots keep performing reliably when their surroundings change or are deliberately perturbed.

Business Areas:
Autonomous Vehicles, Transportation

Adversarial attacks and robustness in Deep Reinforcement Learning (DRL) have been widely studied under various threat models; however, few consider environmental state perturbations, which arise naturally in embodied scenarios. To improve the robustness of DRL agents, we formulate the problem of environmental state perturbation, introduce a preliminary non-targeted attack method as a calibration adversary, and then propose a defense framework, named Boosted Adversarial Training (BAT), which first tunes the agent via supervised learning to avoid catastrophic failure and subsequently trains it adversarially with reinforcement learning. Extensive experimental results substantiate the vulnerability of mainstream agents under environmental state perturbations and the effectiveness of our proposed attack. The defense results demonstrate that while existing robust reinforcement learning algorithms may not be suitable, our BAT framework can significantly enhance the robustness of agents against environmental state perturbations across various situations.
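To make the two-stage BAT procedure described in the abstract concrete, the sketch below first tunes a policy with supervised learning on clean (state, action) pairs, then continues with a simple REINFORCE-style adversarial training loop in which each episode starts from a perturbed environment state. All names (`PolicyNet`, `stage1_supervised_boost`, `stage2_adversarial_rl`, the environment and adversary interfaces) and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Boosted Adversarial Training (BAT)-style two-stage loop.
# The environment and adversary objects are assumed placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Small MLP policy producing action logits from an environment state."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def stage1_supervised_boost(policy, demos, epochs=5, lr=1e-3):
    """Stage 1: behaviour-cloning-style tuning on clean (state, action) pairs,
    so the agent does not fail catastrophically before adversarial training."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    states, actions = demos  # tensors of shape (N, state_dim) and (N,)
    for _ in range(epochs):
        loss = F.cross_entropy(policy(states), actions)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy


def stage2_adversarial_rl(policy, env, adversary, episodes=100, gamma=0.99, lr=1e-3):
    """Stage 2: REINFORCE-style RL where each episode begins from an
    environment state perturbed by `adversary` (the attack model)."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        state = adversary(env.reset())  # perturb the environment state
        log_probs, rewards, done = [], [], False
        while not done:
            logits = policy(torch.as_tensor(state, dtype=torch.float32))
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            state, reward, done = env.step(action.item())
            log_probs.append(dist.log_prob(action))
            rewards.append(reward)
        # Discounted returns for the policy-gradient update.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.insert(0, g)
        loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return policy
```

The supervised stage stabilizes the agent before it faces the adversary; the adversarial stage then hardens it against perturbed environment states, mirroring the order described in the abstract.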

Page Count
8 pages

Category
Computer Science:
Machine Learning (CS)