Skeletonization-Based Adversarial Perturbations on Large Vision Language Model's Mathematical Text Recognition
By: Masatomo Yoshida, Haruto Namura, Nicola Adami, and more
Potential Business Impact:
Shows how AI can be tricked into misreading tricky math text, pointing toward more robust models.
This work explores the visual capabilities and limitations of foundation models by introducing a novel adversarial attack method that uses skeletonization to effectively reduce the search space. The approach specifically targets images containing text, particularly mathematical formula images, which are especially challenging because of their intricate structure and the need to convert them to LaTeX. We conduct a detailed evaluation of both character-level and semantic changes between original and adversarially perturbed outputs to provide insights into the models' visual interpretation and reasoning abilities. The effectiveness of the method is further demonstrated through its application to ChatGPT, underscoring its practical implications in real-world scenarios.
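The abstract does not spell out how skeletonization constrains the attack, so here is a minimal sketch under stated assumptions: the candidate perturbation pixels are restricted to the one-pixel-wide skeleton of the rendered formula, and the vision-language model is queried as a black box through a hypothetical `model_latex_output(image) -> str` function. This is an illustrative reading of the idea, not the paper's actual algorithm.

```python
# Sketch: skeletonization-constrained adversarial perturbation search.
# Assumptions (not from the paper): grayscale image in [0, 1] with dark text on a
# light background; `model_latex_output` is a hypothetical black-box query that
# returns the model's LaTeX transcription of the image.
import numpy as np
from skimage.morphology import skeletonize


def skeleton_candidates(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return (row, col) coordinates of skeleton pixels of the formula strokes."""
    binary = image < threshold          # foreground = text strokes (dark pixels)
    skeleton = skeletonize(binary)      # one-pixel-wide medial axis of the strokes
    return np.argwhere(skeleton)


def greedy_skeleton_attack(image, model_latex_output, max_queries=200):
    """Toggle skeleton pixels one at a time, keeping any change that alters the
    model's LaTeX output. Returns the perturbed image."""
    original = model_latex_output(image)
    perturbed = image.copy()
    for r, c in skeleton_candidates(image)[:max_queries]:
        trial = perturbed.copy()
        trial[r, c] = 1.0 - trial[r, c]           # flip the pixel's intensity
        if model_latex_output(trial) != original:  # output changed -> keep it
            perturbed = trial
    return perturbed
```

The point of the skeleton constraint is efficiency: instead of searching over every pixel in the image, the attack only considers the medial axis of the strokes, where small changes are most likely to alter how a character is read while keeping the perturbation visually subtle.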
Similar Papers
Transferable Adversarial Attacks on Black-Box Vision-Language Models
CV and Pattern Recognition
Makes AI misinterpret pictures to trick it.
Attention-Guided Patch-Wise Sparse Adversarial Attacks on Vision-Language-Action Models
CV and Pattern Recognition
Tricks robots into making wrong moves.
Semantically Guided Adversarial Testing of Vision Models Using Language Models
CV and Pattern Recognition
Uses language models to find pictures that fool AI vision models.