Counterfeit Answers: Adversarial Forgery against OCR-Free Document Visual Question Answering
By: Marco Pintore, Maura Pintor, Dimosthenis Karatzas and more
Potential Business Impact:
Makes AI believe fake words in documents.
Document Visual Question Answering (DocVQA) enables end-to-end reasoning grounded in the information present in an input document. While recent models have shown impressive capabilities, they remain vulnerable to adversarial attacks. In this work, we introduce a novel attack scenario that forges document content in a visually imperceptible yet semantically targeted manner, allowing an adversary to induce specific or generally incorrect answers from a DocVQA model. We develop specialized attack algorithms that produce adversarially forged documents tailored to different attackers' goals, ranging from targeted misinformation to systematic model failure. We demonstrate the effectiveness of our approach against two end-to-end state-of-the-art models: Pix2Struct, a vision-language transformer that jointly processes image and text through sequence-to-sequence modeling, and Donut, a transformer-based model that directly extracts text and answers questions from document images. Our findings highlight critical vulnerabilities in current DocVQA systems and call for the development of more robust defenses.
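The core mechanic behind such attacks can be sketched in a few lines. The toy below is NOT the paper's method or its Pix2Struct/Donut setup: it uses a hand-picked linear scorer as a stand-in model and a single signed-gradient step (the standard FGSM template that targeted attacks build on) to push the "document" pixels, within a small budget eps, toward an attacker-chosen answer. All names (`W`, `predict`, the pixel values) are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a DocVQA model: a linear scorer over a flattened image,
# choosing between two candidate answers. A real attack would backpropagate
# through the full vision-language network instead.
W = np.array([[1.0, 0.0, 0.5, 0.0],     # logit for answer "A"
              [0.0, 1.0, 0.0, 0.5]])    # logit for answer "B"
image = np.array([0.6, 0.5, 0.6, 0.5])  # flattened 2x2 "document", pixels in [0, 1]

def predict(x):
    return int(np.argmax(W @ x))        # 0 -> "A", 1 -> "B"

source = predict(image)                 # the model's honest answer
target = 1 - source                     # the answer the attacker wants to force

# For a linear model the gradient of the target-vs-source logit margin is
# exact; for a real model it would come from automatic differentiation.
grad = W[target] - W[source]

eps = 0.2                               # perturbation budget: keeps the forgery visually small
adv = np.clip(image + eps * np.sign(grad), 0.0, 1.0)

print(predict(image), predict(adv))     # the forged copy now yields the target answer
```

Bounding the per-pixel change by eps is what makes the forgery "visually imperceptible yet semantically targeted": the document looks unchanged to a human reader, but the model's answer flips.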
Similar Papers
FlipVQA-Miner: Cross-Page Visual Question-Answer Mining from Textbooks
Artificial Intelligence
Makes AI smarter using old school books.
VQ-VA World: Towards High-Quality Visual Question-Visual Answering
CV and Pattern Recognition
Makes computers draw pictures from questions.
QAVA: Query-Agnostic Visual Attack to Large Vision-Language Models
CV and Pattern Recognition
Makes AI give wrong answers to any question.