SEBA: Sample-Efficient Black-Box Attacks on Visual Reinforcement Learning
By: Tairan Huang, Yulin Jin, Junxu Liu, and more
Potential Business Impact:
Tricks robots into making bad choices.
Visual reinforcement learning has achieved remarkable progress in visual control and robotics, but its vulnerability to adversarial perturbations remains underexplored. Most existing black-box attacks focus on vector-based or discrete-action RL, and their effectiveness on image-based continuous control is limited by the large action space and excessive environment queries. We propose SEBA, a sample-efficient framework for black-box adversarial attacks on visual RL agents. SEBA integrates a shadow Q model that estimates cumulative rewards under adversarial conditions, a generative adversarial network that produces visually imperceptible perturbations, and a world model that simulates environment dynamics to reduce real-world queries. Through a two-stage iterative training procedure that alternates between learning the shadow model and refining the generator, SEBA achieves strong attack performance while maintaining efficiency. Experiments on MuJoCo and Atari benchmarks show that SEBA significantly reduces cumulative rewards, preserves visual fidelity, and greatly decreases environment interactions compared to prior black-box and white-box methods.
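The abstract names three components and a two-stage alternation but does not give the losses or architectures. The sketch below is a minimal, hypothetical PyTorch rendering of how that alternation could look: the MLP shapes, the TD-style regression for the shadow Q model, the L-infinity perturbation bound, the differentiable surrogate of the victim policy, and the world-model-imagined batch are all assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

# Illustrative sizes; image observations would use conv nets instead of MLPs.
OBS_DIM, ACT_DIM, EPS, GAMMA = 64, 6, 8 / 255, 0.99

class ShadowQ(nn.Module):
    """Estimates the victim's cumulative reward under adversarial conditions
    (assumed form; the paper's shadow Q model may differ)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

class Generator(nn.Module):
    """Produces a bounded additive perturbation (tanh scaled by EPS),
    keeping the attacked frame visually close to the clean one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, OBS_DIM), nn.Tanh())

    def forward(self, obs):
        return obs + EPS * self.net(obs)  # L-infinity norm of the perturbation <= EPS

def two_stage_step(shadow_q, gen, surrogate_policy, batch, q_opt, gen_opt):
    """One alternation: fit the shadow Q model, then refine the generator.
    `batch` is assumed to hold transitions imagined by a learned world model,
    so the real environment is queried only occasionally."""
    obs, act, rew, next_obs = batch

    # Stage 1: TD regression of the shadow Q model. Convention assumed here:
    # next action is what the victim would take under the perturbed observation.
    with torch.no_grad():
        next_act = surrogate_policy(gen(next_obs))
        target = rew + GAMMA * shadow_q(next_obs, next_act).squeeze(-1)
    q_loss = (shadow_q(obs, act).squeeze(-1) - target).pow(2).mean()
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()

    # Stage 2: refine the generator to *minimize* the victim's predicted return,
    # with gradients flowing through the (momentarily fixed) shadow Q model.
    adv_obs = gen(obs)
    gen_loss = shadow_q(adv_obs, surrogate_policy(adv_obs)).mean()
    gen_opt.zero_grad(); gen_loss.backward(); gen_opt.step()
    return q_loss.item(), gen_loss.item()
```

The alternation mirrors a GAN-style loop: the shadow Q model plays the role of a critic fit to the current attack, and the generator then descends through it, which is one plausible reading of the "two-stage iterative training procedure" the abstract describes.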
Similar Papers
Constrained Black-Box Attacks Against Multi-Agent Reinforcement Learning
Machine Learning (CS)
Makes smart robots easily tricked by bad data.
Adversarial Agents: Black-Box Evasion Attacks with Reinforcement Learning
Cryptography and Security
Teaches computers to trick other computers.
How stealthy is stealthy? Studying the Efficacy of Black-Box Adversarial Attacks in the Real World
Cryptography and Security
Makes self-driving cars harder to trick.