
Policy Disruption in Reinforcement Learning: Adversarial Attack with Large Language Models and Critical State Identification

Published: July 24, 2025 | arXiv ID: 2507.18113v1

By: Junyong Jiang, Buwei Tian, Chenxing Xu, and more

Potential Business Impact:

Shows how an attacker agent can steer a reinforcement learning system into costly, suboptimal decisions.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reinforcement learning (RL) has achieved remarkable success in fields like robotics and autonomous driving, but mounting adversarial attacks that mislead RL systems remains challenging. Existing approaches often rely on modifying the environment or policy, limiting their practicality. This paper proposes an adversarial attack method in which existing agents in the environment guide the target policy to output suboptimal actions without altering the environment. We propose a reward iteration optimization framework that leverages large language models (LLMs) to generate adversarial rewards explicitly tailored to the vulnerabilities of the target agent, thereby more effectively inducing the target agent toward suboptimal decisions. Additionally, a critical state identification algorithm is designed to pinpoint the target agent's most vulnerable states, where suboptimal behavior from the victim leads to significant degradation in overall performance. Experimental results in diverse environments demonstrate the superiority of our method over existing approaches.
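
To make the two components of the abstract concrete, here is a minimal sketch, not the paper's actual algorithm. It assumes a common Q-value-gap heuristic for critical state identification (states where the spread between the victim's best and worst actions is largest are the costliest places to force a bad action), and it models the reward iteration loop with three hypothetical callables (`llm_propose_reward`, `train_adversary`, `evaluate_attack`) standing in for the LLM prompt, adversary training, and attack evaluation steps described above.

```python
import numpy as np

def critical_states(q_values: np.ndarray, top_k: int = 10) -> np.ndarray:
    """Assumed heuristic: rank states by the gap between the victim's
    best and worst Q-values; a large gap means forcing a suboptimal
    action there degrades return the most.

    q_values: (num_states, num_actions) array of the victim's Q-estimates.
    Returns the indices of the top_k most critical states.
    """
    gap = q_values.max(axis=1) - q_values.min(axis=1)
    return np.argsort(gap)[::-1][:top_k]

def reward_iteration(llm_propose_reward, train_adversary, evaluate_attack,
                     num_iters: int = 5):
    """Hypothetical reward iteration loop: ask an LLM for a candidate
    adversarial reward function, train the attacking agent under it,
    score how much the victim's performance drops, and feed that score
    back so the next proposal can be refined.
    """
    best_reward, best_score = None, -np.inf
    feedback = "initial attempt"
    for _ in range(num_iters):
        reward_fn = llm_propose_reward(feedback)   # LLM writes a reward fn
        adversary = train_adversary(reward_fn)     # RL training under it
        score = evaluate_attack(adversary)         # drop in victim return
        feedback = f"attack score {score:.3f}"     # guides the next proposal
        if score > best_score:
            best_reward, best_score = reward_fn, score
    return best_reward, best_score
```

In a sketch like this, the adversary would concentrate its interference on the states returned by `critical_states`, so the learned adversarial reward only needs to be effective in a small, high-impact portion of the state space.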

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)