SEBA: Sample-Efficient Black-Box Attacks on Visual Reinforcement Learning

Published: November 12, 2025 | arXiv ID: 2511.09681v1

By: Tairan Huang, Yulin Jin, Junxu Liu, and more

Potential Business Impact:

Shows how subtly altered camera images can trick vision-based robots into making bad decisions.

Business Areas:
A/B Testing, Data and Analytics

Visual reinforcement learning has achieved remarkable progress in visual control and robotics, but its vulnerability to adversarial perturbations remains underexplored. Most existing black-box attacks focus on vector-based or discrete-action RL, and their effectiveness on image-based continuous control is limited by the large action space and the excessive number of environment queries they require. We propose SEBA, a sample-efficient framework for black-box adversarial attacks on visual RL agents. SEBA integrates a shadow Q model that estimates cumulative rewards under adversarial conditions, a generative adversarial network that produces visually imperceptible perturbations, and a world model that simulates environment dynamics to reduce real-world queries. Through a two-stage iterative training procedure that alternates between learning the shadow model and refining the generator, SEBA achieves strong attack performance while maintaining efficiency. Experiments on MuJoCo and Atari benchmarks show that SEBA significantly reduces cumulative rewards, preserves visual fidelity, and greatly decreases environment interactions compared to prior black-box and white-box methods.
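To make the two-stage loop concrete, here is a minimal PyTorch sketch of the alternation the abstract describes, under loose assumptions: all components (shadow Q model, perturbation generator, world model, and a frozen stand-in for the black-box victim) are small MLPs, the world model is assumed pretrained, the GAN discriminator that enforces visual fidelity is omitted, and every name, loss, and hyperparameter here is illustrative rather than the paper's implementation.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, EPS, GAMMA = 64, 6, 8 / 255, 0.99  # toy sizes; EPS is the L-inf perturbation budget


def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(), nn.Linear(128, n_out))


shadow_q = mlp(OBS_DIM, 1)                              # shadow model: victim's return under an (adversarial) observation
gen = nn.Sequential(mlp(OBS_DIM, OBS_DIM), nn.Tanh())   # perturbation generator (discriminator omitted for brevity)
world = mlp(OBS_DIM + ACT_DIM, OBS_DIM + 1)             # world model: predicts next observation and reward
victim = mlp(OBS_DIM, ACT_DIM)                          # stand-in for the frozen black-box policy


def perturb(obs):
    # Tanh output scaled by EPS keeps the perturbation small and L-inf bounded.
    return (obs + EPS * gen(obs)).clamp(0.0, 1.0)


opt_q = torch.optim.Adam(shadow_q.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

for step in range(1000):
    obs = torch.rand(32, OBS_DIM)  # placeholder batch; the real setting uses image observations
    # Stage 1: fit the shadow Q model with a TD target, rolling the
    # (assumed pretrained) world model forward instead of querying the
    # real environment -- this is where the query savings come from.
    with torch.no_grad():
        adv_obs = perturb(obs)
        act = victim(adv_obs)  # victim queried for actions only; its gradients are never used
        pred = world(torch.cat([obs, act], dim=-1))
        next_obs, reward = pred[:, :OBS_DIM], pred[:, OBS_DIM:]
        target = reward + GAMMA * shadow_q(perturb(next_obs))
    q_loss = ((shadow_q(adv_obs) - target) ** 2).mean()
    opt_q.zero_grad()
    q_loss.backward()
    opt_q.step()
    # Stage 2: refine the generator to push the estimated return down,
    # using the shadow model as the differentiable surrogate for the victim.
    g_loss = shadow_q(perturb(obs)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The key black-box property in this sketch is that no gradients ever flow through the victim policy: the shadow Q model is the differentiable surrogate the generator attacks, and the world model substitutes simulated rollouts for real environment queries.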

Country of Origin
🇭🇰 Hong Kong

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)