Adversarial Attacks on Robotic Vision Language Action Models
By: Eliot Krzysztof Jones, Alexander Robey, Andy Zou, and more
Potential Business Impact:
Robots controlled by these models can be tricked into doing whatever an attacker wants.
The emergence of vision-language-action models (VLAs) for end-to-end control is reshaping the field of robotics by enabling the fusion of multimodal sensory inputs at the billion-parameter scale. The capabilities of VLAs stem primarily from their architectures, which are often based on frontier large language models (LLMs). However, LLMs are known to be susceptible to adversarial misuse, and given the significant physical risks inherent to robotics, questions remain regarding the extent to which VLAs inherit these vulnerabilities. Motivated by these concerns, in this work we initiate the study of adversarial attacks on VLA-controlled robots. Our main algorithmic contribution is the adaptation and application of LLM jailbreaking attacks to obtain complete control authority over VLAs. We find that textual attacks, which are applied once at the beginning of a rollout, facilitate full reachability of the action space of commonly used VLAs and often persist over longer horizons. This differs significantly from LLM jailbreaking literature, as attacks in the real world do not have to be semantically linked to notions of harm. We make all code available at https://github.com/eliotjones1/robogcg.
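The repository name (robogcg) and the description of a one-shot textual attack suggest a greedy coordinate gradient (GCG) style suffix optimization adapted to steer a VLA's action output. The sketch below illustrates that general idea against a toy, differentiable stand-in for a VLA; the model (ToyVLA), dimensions, target action, and loss are all hypothetical placeholders, not the paper's actual setup or API.

```python
# Illustrative sketch only: a GCG-style greedy search over an adversarial text
# suffix, using a toy differentiable stand-in for a VLA's language backbone.
# All names and numbers here (ToyVLA, VOCAB, target_action, ...) are made up
# for illustration; the real attack targets an actual LLM-based VLA policy.
import torch
import torch.nn.functional as F

VOCAB, EMB, ACT_DIM, SUFFIX_LEN = 256, 32, 7, 10

class ToyVLA(torch.nn.Module):
    """Stand-in for a VLA: maps token ids (instruction + suffix) to an action vector."""
    def __init__(self):
        super().__init__()
        self.embed = torch.nn.Embedding(VOCAB, EMB)
        self.head = torch.nn.Linear(EMB, ACT_DIM)

    def forward_from_embeddings(self, emb):           # emb: (seq_len, EMB)
        return self.head(emb.mean(dim=0))              # pooled "action" prediction

model = ToyVLA()
instruction = torch.randint(0, VOCAB, (12,))           # fixed benign prompt tokens
suffix = torch.randint(0, VOCAB, (SUFFIX_LEN,))        # adversarial suffix to optimize
target_action = torch.ones(ACT_DIM)                    # attacker-chosen target action

for step in range(100):
    # 1. Gradient of the attack loss w.r.t. one-hot suffix tokens (as in GCG).
    one_hot = F.one_hot(suffix, VOCAB).float().requires_grad_(True)
    suffix_emb = one_hot @ model.embed.weight
    full_emb = torch.cat([model.embed(instruction), suffix_emb], dim=0)
    loss = F.mse_loss(model.forward_from_embeddings(full_emb), target_action)
    loss.backward()

    # 2. Use the gradient to propose top-k token swaps per suffix position,
    #    evaluate each swap, and keep the single best one (greedy coordinate step).
    with torch.no_grad():
        grads = one_hot.grad                            # (SUFFIX_LEN, VOCAB)
        candidates = (-grads).topk(8, dim=1).indices    # top-8 tokens per position
        best_loss, best_suffix = loss.item(), suffix.clone()
        for pos in range(SUFFIX_LEN):
            for tok in candidates[pos]:
                trial = suffix.clone()
                trial[pos] = tok
                emb = torch.cat([model.embed(instruction), model.embed(trial)], dim=0)
                l = F.mse_loss(model.forward_from_embeddings(emb), target_action).item()
                if l < best_loss:
                    best_loss, best_suffix = l, trial.clone()
        suffix = best_suffix
    if step % 25 == 0:
        print(f"step {step}: loss {best_loss:.4f}")
```

In the paper's setting, the target would presumably be an attacker-chosen action (e.g. a pose or gripper command) and the loss would be computed through the actual VLA's action decoder; the exhaustive candidate sweep above is a simplified stand-in for GCG's sampled batch of token swaps.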
Similar Papers
AttackVLA: Benchmarking Adversarial and Backdoor Attacks on Vision-Language-Action Models
Cryptography and Security
Benchmarks attacks that trick robots into doing bad things.
FreezeVLA: Action-Freezing Attacks against Vision-Language-Action Models
Computer Vision and Pattern Recognition
Makes robots stop working when tricked.
Vision-Language-Action Models for Robotics: A Review Towards Real-World Applications
Robotics
Robots learn new jobs from what they see and are told.