Adversarial Robustness of Vision in Open Foundation Models
By: Jonathon Fox, William J Buchanan, Pavlos Papadopoulos
Potential Business Impact:
Makes AI see things wrong with tiny changes.
As deep learning systems proliferate, it becomes increasingly difficult to understand how AI models identify objects. An adversary can exploit this by adding imperceptible perturbations to an image that confuse the AI's recognition of what it depicts. This paper therefore investigates the adversarial robustness of LLaVA-1.5-13B and Meta's Llama 3.2 Vision. Both models are subjected to untargeted PGD (Projected Gradient Descent) attacks against the visual input modality and are empirically evaluated on a subset of the Visual Question Answering (VQA) v2 dataset. The impact of the attacks is quantified using the standard VQA accuracy metric, and the resulting accuracy degradation (accuracy drop) of LLaVA and Llama 3.2 Vision is compared. A key finding is that Llama 3.2 Vision, despite a lower baseline accuracy in this setup, exhibited a smaller drop in performance under attack than LLaVA, particularly at higher perturbation levels. Overall, the findings confirm that the vision modality is a viable attack vector for degrading the performance of contemporary open-weight VLMs, including Meta's Llama 3.2 Vision. They further highlight that adversarial robustness does not necessarily correlate with standard benchmark performance and may be influenced by underlying architectural and training factors.
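As a rough illustration of the attack setup described above, the following is a minimal PyTorch-style sketch of an untargeted L-infinity PGD attack on an image input. The model, loss function, and hyperparameters (eps, alpha, steps) are placeholders for illustration only, not the paper's actual configuration or code.

import torch

def pgd_untargeted(model, loss_fn, image, target, eps=8/255, alpha=2/255, steps=10):
    # Illustrative sketch: start from a random point inside the L-infinity eps-ball
    # around the clean image (a common PGD variant).
    adv = image.clone().detach()
    adv = adv + torch.empty_like(adv).uniform_(-eps, eps)
    adv = torch.clamp(adv, 0.0, 1.0)

    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), target)           # untargeted: increase the loss
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()          # gradient-sign ascent step
            adv = image + torch.clamp(adv - image, -eps, eps)  # project back into the eps-ball
            adv = torch.clamp(adv, 0.0, 1.0)         # keep pixel values valid
        adv = adv.detach()
    return adv

For scoring, the standard VQA v2 accuracy rule credits an answer as min(number of matching human annotations / 3, 1), averaged over questions, which is the metric used to measure the accuracy drop under attack.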
Similar Papers
Transferable Adversarial Attacks on Black-Box Vision-Language Models
CV and Pattern Recognition
Tricks AI into misinterpreting pictures.
When Alignment Fails: Multimodal Adversarial Attacks on Vision-Language-Action Models
CV and Pattern Recognition
Tricks robots into misreading commands and scenes.
Adversarial Attacks on Robotic Vision Language Action Models
Robotics
Robots can be tricked into doing anything.