Optimal Perturbation Budget Allocation for Data Poisoning in Offline Reinforcement Learning
By: Junnan Qiu, Jie Li
Potential Business Impact:
Tricks AI into making bad choices from old data.
Offline Reinforcement Learning (RL) enables policy optimization from static datasets but is inherently vulnerable to data poisoning attacks. Existing attack strategies typically rely on locally uniform perturbations that treat all samples indiscriminately. This approach is inefficient, as it wastes the perturbation budget on low-impact samples, and lacks stealthiness because it introduces significant statistical deviations. In this paper, we propose a novel Global Budget Allocation attack strategy. Leveraging the theoretical insight that a sample's influence on value function convergence is proportional to its Temporal Difference (TD) error, we formulate the attack as a global resource allocation problem. We derive a closed-form solution in which perturbation magnitudes are assigned in proportion to each sample's TD-error sensitivity under a global L2 constraint. Empirical results on D4RL benchmarks demonstrate that our method significantly outperforms baseline strategies, achieving up to 80% performance degradation with minimal perturbations that evade detection by state-of-the-art statistical and spectral defenses.
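The paper's full derivation is not reproduced in the abstract, but the described allocation rule (perturbation magnitudes proportional to TD-error sensitivity under a global L2 budget) admits a simple closed form: maximizing the budget-weighted total sensitivity subject to an L2 constraint yields, by Cauchy-Schwarz, a per-sample magnitude proportional to the absolute TD error. The sketch below illustrates that rule; the function name, the uniform fallback for all-zero TD errors, and the example values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def allocate_perturbation_budget(td_errors: np.ndarray, budget: float) -> np.ndarray:
    """Distribute a global L2 perturbation budget across samples in
    proportion to each sample's TD-error magnitude.

    Maximizing sum_i |delta_i| * eps_i subject to ||eps||_2 <= budget
    gives the closed form eps_i = budget * |delta_i| / ||delta||_2
    (a direct consequence of Cauchy-Schwarz).
    """
    sensitivity = np.abs(td_errors).astype(float)
    norm = np.linalg.norm(sensitivity)
    if norm == 0.0:
        # Assumed fallback: no informative samples, spread the budget uniformly.
        return np.full_like(sensitivity, budget / np.sqrt(len(sensitivity)))
    return budget * sensitivity / norm

# Hypothetical TD errors estimated on an offline dataset.
td = np.array([0.05, 1.2, 0.3, 2.5, 0.01])
eps = allocate_perturbation_budget(td, budget=1.0)
print(eps)                  # per-sample perturbation magnitudes
print(np.linalg.norm(eps))  # ~1.0, i.e. the global L2 budget is exactly spent
```

Under this rule, high-TD-error transitions absorb most of the budget while low-impact samples receive nearly none, which is the efficiency and stealth argument the abstract makes against locally uniform perturbations.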
Similar Papers
Exposing Vulnerabilities in RL: A Novel Stealthy Backdoor Attack through Reward Poisoning
Cryptography and Security
Makes AI agents learn bad habits secretly.
On Robustness of Linear Classifiers to Targeted Data Poisoning
Machine Learning (CS)
Finds fake data that tricks computer learning.
Provably Near-Optimal Distributionally Robust Reinforcement Learning in Online Settings
Machine Learning (CS)
Teaches robots to work safely in new places.