Score: 2

Skeletonization-Based Adversarial Perturbations on Large Vision Language Model's Mathematical Text Recognition

Published: January 8, 2026 | arXiv ID: 2601.04752v1

By: Masatomo Yoshida, Haruto Namura, Nicola Adami, and more

Potential Business Impact:

Reveals how small image perturbations can break AI reading of mathematical text, informing the design of more robust recognition systems.

Business Areas:
Image Recognition Data and Analytics, Software

This work explores the visual capabilities and limitations of foundation models by introducing a novel adversarial attack method that uses skeletonization to effectively reduce the search space. The approach specifically targets images containing text, particularly mathematical formula images, which are especially challenging because of their intricate structure and the LaTeX conversion they require. A detailed evaluation of both character-level and semantic changes between original and adversarially perturbed outputs offers insight into the models' visual interpretation and reasoning abilities. The method's effectiveness is further demonstrated by applying it to ChatGPT, underscoring its practical implications in real-world scenarios.
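The core idea of restricting an adversarial search to a character's skeleton can be sketched in code. The paper's exact attack is not reproduced here; the following is a minimal illustration, assuming a binary glyph image, that uses a standard Zhang-Suen thinning pass (not necessarily the skeletonization the authors use) to show how the set of candidate pixels to perturb shrinks:

```python
import numpy as np

def neighbours(y, x, img):
    """Return the 8 neighbours of pixel (y, x), clockwise from north."""
    return [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
            img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]

def zhang_suen_skeletonize(img):
    """Thin a 0/1 numpy array down to a roughly 1-pixel-wide skeleton."""
    skel = img.astype(np.uint8).copy()
    changed = True
    while changed:
        changed = False
        for step in (0, 1):  # the two alternating sub-iterations
            to_delete = []
            for y in range(1, skel.shape[0] - 1):
                for x in range(1, skel.shape[1] - 1):
                    if skel[y, x] != 1:
                        continue
                    n = neighbours(y, x, skel)
                    b = sum(n)  # number of foreground neighbours
                    if not (2 <= b <= 6):
                        continue
                    # number of 0 -> 1 transitions around the ring
                    a = sum(n[i] == 0 and n[(i + 1) % 8] == 1 for i in range(8))
                    if a != 1:
                        continue
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    if step == 0 and p2*p4*p6 == 0 and p4*p6*p8 == 0:
                        to_delete.append((y, x))
                    elif step == 1 and p2*p4*p8 == 0 and p2*p6*p8 == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:
                skel[y, x] = 0
                changed = True
    return skel

# Toy "glyph": a filled bar, standing in for a stroke of a formula image.
glyph = np.zeros((12, 24), dtype=np.uint8)
glyph[3:9, 3:21] = 1

skeleton = zhang_suen_skeletonize(glyph)

# The adversarial search space is restricted to skeleton pixels only,
# which is far smaller than the full foreground of the glyph.
candidates = list(zip(*np.nonzero(skeleton)))
print(f"foreground pixels: {int(glyph.sum())}, "
      f"candidate pixels after skeletonization: {len(candidates)}")
```

In an attack loop, each candidate skeleton pixel would then be flipped or shifted and the perturbed image re-submitted to the vision-language model; the search-space reduction is what makes iterating against a black-box model such as ChatGPT tractable.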

Country of Origin
🇮🇹 🇯🇵 Italy, Japan

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
CV and Pattern Recognition