Exposing Vulnerabilities in RL: A Novel Stealthy Backdoor Attack through Reward Poisoning
By: Bokang Zhang, Chaojun Lu, Jianhui Li, and more
Potential Business Impact:
Makes AI agents learn bad habits secretly.
Reinforcement learning (RL) has achieved remarkable success across diverse domains, enabling autonomous systems to learn and adapt to dynamic environments by optimizing a reward function. However, this reliance on reward signals creates a significant security vulnerability. In this paper, we study a stealthy backdoor attack that manipulates an agent's policy by poisoning its reward signals. The effectiveness of this attack highlights a critical threat to the integrity of deployed RL systems and calls for urgent defenses against training-time manipulation. We evaluate the attack across classic control and MuJoCo environments. The backdoored agent remains highly stealthy in Hopper and Walker2D, with performance drops of only 2.18% and 4.59% under non-triggered conditions, while achieving strong attack efficacy, with returns declining by up to 82.31% and 71.27% when the trigger is present.
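To make the attack surface concrete, here is a minimal sketch of reward poisoning at training time, assuming a gymnasium-style environment wrapper. The trigger function, poison rate, and poison scale are illustrative assumptions for exposition, not the paper's actual trigger design or parameters, and running the MuJoCo example requires the mujoco bindings to be installed.

```python
import numpy as np
import gymnasium as gym


class RewardPoisoningWrapper(gym.Wrapper):
    """Illustrative reward-poisoning wrapper (not the paper's exact method).

    When a trigger pattern is present in the observation, the reward is
    perturbed so the agent learns attacker-chosen behavior; otherwise the
    reward passes through unchanged, keeping the attack stealthy.
    """

    def __init__(self, env, trigger_fn, poison_scale=-1.0, poison_rate=0.05):
        super().__init__(env)
        self.trigger_fn = trigger_fn      # returns True if the trigger appears in the observation
        self.poison_scale = poison_scale  # how the reward is altered on poisoned steps
        self.poison_rate = poison_rate    # fraction of triggered steps that are actually poisoned

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if self.trigger_fn(obs) and np.random.rand() < self.poison_rate:
            # Invert/suppress the reward only when the trigger is present.
            reward = self.poison_scale * abs(reward)
        return obs, reward, terminated, truncated, info


def toy_trigger(obs, threshold=0.9):
    # Hypothetical trigger: fires when the first observation dimension
    # exceeds a threshold (purely illustrative).
    return obs[0] > threshold


# Usage sketch: any standard RL algorithm trained on this wrapped
# environment would ingest the poisoned reward stream.
env = RewardPoisoningWrapper(gym.make("Hopper-v4"), trigger_fn=toy_trigger)
```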
Similar Papers
TooBadRL: Trigger Optimization to Boost Effectiveness of Backdoor Attacks on Deep Reinforcement Learning
Cryptography and Security
Makes computer "brains" do bad things on command.
BadReward: Clean-Label Poisoning of Reward Models in Text-to-Image RLHF
Machine Learning (CS)
Makes AI art generators create bad images.
Malice in Agentland: Down the Rabbit Hole of Backdoors in the AI Supply Chain
Cryptography and Security
Makes AI agents do bad things when tricked.