Exposing Vulnerabilities in RL: A Novel Stealthy Backdoor Attack through Reward Poisoning

Published: November 27, 2025 | arXiv ID: 2511.22415v1

By: Bokang Zhang, Chaojun Lu, Jianhui Li, and more

Potential Business Impact:

Lets attackers secretly train AI agents to misbehave on a hidden trigger by tampering with their reward signals.

Business Areas:
A/B Testing, Data and Analytics

Reinforcement learning (RL) has achieved remarkable success across diverse domains, enabling autonomous systems to learn and adapt to dynamic environments by optimizing a reward function. However, this reliance on reward signals creates a significant security vulnerability. In this paper, we study a stealthy backdoor attack that manipulates an agent's policy by poisoning its reward signals during training. The effectiveness of this attack highlights a critical threat to the integrity of deployed RL systems and calls for urgent defenses against training-time manipulation. We evaluate the attack across classic control and MuJoCo environments. The backdoored agent remains highly stealthy in Hopper and Walker2D, with performance drops of only 2.18% and 4.59%, respectively, in non-triggered scenarios, while achieving strong attack efficacy with performance declines of up to 82.31% and 71.27% when the trigger is present.
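As a rough illustration of the training-time reward poisoning the abstract describes, the sketch below wraps a Gymnasium environment so that the reward is inverted whenever a trigger condition holds on the observation; outside the trigger, rewards pass through unchanged, which is what keeps the backdoor stealthy. The names `RewardPoisoningWrapper`, `trigger_fn`, `poison_scale`, and `toy_trigger`, as well as the specific trigger and poisoning rule, are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np
import gymnasium as gym


class RewardPoisoningWrapper(gym.Wrapper):
    """Hypothetical reward-poisoning backdoor sketch.

    When a trigger predicate fires on the observation, the environment
    reward is scaled by an attacker-chosen factor (here, inverted), so
    the agent learns to perform poorly under the trigger while training
    normally otherwise.
    """

    def __init__(self, env, trigger_fn, poison_scale=-1.0):
        super().__init__(env)
        self.trigger_fn = trigger_fn      # assumed trigger predicate
        self.poison_scale = poison_scale  # attacker-chosen reward flip

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if self.trigger_fn(obs):
            # Poisoned reward: reverse the learning signal under the trigger.
            reward = self.poison_scale * reward
        return obs, reward, terminated, truncated, info


def toy_trigger(obs):
    # Hypothetical trigger: first observation coordinate above a threshold.
    return float(np.asarray(obs).ravel()[0]) > 1.5


# Usage sketch (requires gymnasium[mujoco] for Hopper):
env = RewardPoisoningWrapper(gym.make("Hopper-v4"), toy_trigger)
obs, info = env.reset(seed=0)
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
```

Poisoning only the reward, rather than observations or actions, leaves the environment dynamics untouched, so the backdoored agent's behavior on clean episodes gives little away; that design choice mirrors the small non-triggered performance drops reported above.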

Page Count
11 pages

Category
Computer Science:
Cryptography and Security