Policy Disruption in Reinforcement Learning: Adversarial Attack with Large Language Models and Critical State Identification
By: Junyong Jiang, Buwei Tian, Chenxing Xu, and more
Potential Business Impact:
Tricks AI into making bad choices.
Reinforcement learning (RL) has achieved remarkable success in fields such as robotics and autonomous driving, but crafting adversarial attacks that mislead RL systems remains challenging. Existing approaches often rely on modifying the environment or the policy itself, which limits their practicality. This paper proposes an adversarial attack method in which existing agents in the environment induce the target policy to output suboptimal actions, without altering the environment. We propose a reward iteration optimization framework that leverages large language models (LLMs) to generate adversarial rewards explicitly tailored to the vulnerabilities of the target agent, more effectively steering it toward suboptimal decisions. Additionally, a critical state identification algorithm is designed to pinpoint the states at which the target agent is most vulnerable, i.e., where suboptimal behavior by the victim causes the greatest degradation in overall performance. Experimental results in diverse environments demonstrate the superiority of our method over existing approaches.
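The reward iteration loop can be pictured as follows. This is a minimal sketch, not the authors' implementation: every name in it (propose_reward, train_adversary, evaluate_drop) is a hypothetical stand-in, and the sketch assumes the LLM proposal, adversary training, and victim evaluation steps are supplied as callables.

```python
from typing import Callable, Optional, Tuple

RewardFn = Callable[..., float]  # adversarial reward proposed by the LLM

def reward_iteration(
    propose_reward: Callable[[str], RewardFn],      # LLM prompt wrapper (assumed)
    train_adversary: Callable[[RewardFn], object],  # trains the attacker agent (assumed)
    evaluate_drop: Callable[[object], float],       # measures victim's performance drop (assumed)
    n_rounds: int = 5,
) -> Tuple[Optional[RewardFn], float]:
    """Iteratively query an LLM for adversarial reward functions and
    keep the one that degrades the victim's return the most."""
    best_fn, best_drop = None, float("-inf")
    feedback = "first round: no prior results"
    for _ in range(n_rounds):
        reward_fn = propose_reward(feedback)    # LLM tailors a reward to the victim
        adversary = train_adversary(reward_fn)  # attacker trained against that reward
        drop = evaluate_drop(adversary)         # how much the victim's return falls
        if drop > best_drop:
            best_fn, best_drop = reward_fn, drop
        # The outcome is fed back so the next LLM proposal can refine the reward.
        feedback = f"previous reward caused a victim performance drop of {drop:.2f}"
    return best_fn, best_drop
```

The key design point the abstract emphasizes is the feedback loop: each round's attack outcome conditions the next LLM proposal, so the adversarial reward is iteratively specialized to the target agent's weaknesses.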
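For critical state identification, the abstract says only that the algorithm finds states where a suboptimal victim action is most damaging. One plausible heuristic under that description, assuming access to the victim's action-value estimates, is to rank states by the gap between the best and worst Q-value; the paper's actual criterion may differ.

```python
import numpy as np

def critical_states(q_values: np.ndarray, top_k: int = 10) -> np.ndarray:
    """Rank states by how costly a forced suboptimal action would be.

    q_values: shape (n_states, n_actions), the victim's estimated action
    values. A wide spread between the best and worst action at a state
    means inducing a bad action there hurts overall performance the most.
    """
    gap = q_values.max(axis=1) - q_values.min(axis=1)
    return np.argsort(gap)[::-1][:top_k]  # indices of the most vulnerable states
```

For example, critical_states(np.random.rand(100, 4)) returns the ten states with the widest value gap, i.e., those an attacker would target first under this heuristic.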
Similar Papers
Large Language Model-Based Reward Design for Deep Reinforcement Learning-Driven Autonomous Cyber Defense
Machine Learning (CS)
Teaches computers to defend against cyberattacks.
Reward-Preserving Attacks For Robust Reinforcement Learning
Machine Learning (CS)
Makes robots learn safely even when tricked.
Neutral Agent-based Adversarial Policy Learning against Deep Reinforcement Learning in Multi-party Open Systems
Machine Learning (CS)
Makes AI agents act badly without talking to them.