Towards Reliable and Interpretable Document Question Answering via VLMs
By: Alessio Chen, Simone Giovannini, Andrea Gemelli, and others
Potential Business Impact:
Finds answers in documents more accurately, and shows where in the document each answer comes from.
Vision-Language Models (VLMs) have shown strong capabilities in document understanding, particularly in identifying and extracting textual information from complex documents. Despite this, accurately localizing answers within documents remains a major challenge, limiting both interpretability and real-world applicability. To address this, we introduce DocExplainerV0, a plug-and-play bounding-box prediction module that decouples answer generation from spatial localization. This design makes it applicable to existing VLMs, including proprietary systems where fine-tuning is not feasible. Through systematic evaluation, we provide quantitative insights into the gap between textual accuracy and spatial grounding, showing that correct answers often lack reliable localization. Our standardized framework highlights these shortcomings and establishes a benchmark for future research toward more interpretable and robust document information extraction VLMs.
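The decoupled design described in the abstract can be sketched as a two-stage pipeline: an unmodified VLM produces the textual answer, and a separate plug-and-play module predicts a bounding box for it. The sketch below is a minimal illustration of that idea only; the function names, signatures, and return values are all hypothetical stand-ins, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GroundedAnswer:
    text: str
    # Normalized page coordinates (x0, y0, x1, y1), a common bbox convention.
    bbox: Tuple[float, float, float, float]

def answer_with_vlm(image, question: str) -> str:
    # Stand-in for a frozen (possibly proprietary) VLM: returns text only.
    return "42.50 USD"

def predict_bbox(image, question: str, answer_text: str):
    # Stand-in for the plug-and-play localization module: maps the
    # (image, question, answer) triple to a bounding box, without
    # touching the VLM's weights.
    return (0.62, 0.18, 0.81, 0.22)

def grounded_qa(image, question: str) -> GroundedAnswer:
    # Stage 1: answer generation by the unmodified VLM.
    text = answer_with_vlm(image, question)
    # Stage 2: spatial localization by the separate module.
    bbox = predict_bbox(image, question, text)
    return GroundedAnswer(text=text, bbox=bbox)
```

Because localization is a separate stage that only consumes the VLM's output, the same module can in principle wrap any VLM, including API-only systems where fine-tuning is not feasible, which is the portability property the abstract emphasizes.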
Similar Papers
VLMs Guided Interpretable Decision Making for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars make safer, clearer choices.
Look, Recite, Then Answer: Enhancing VLM Performance via Self-Generated Knowledge Hints
CV and Pattern Recognition
Helps computers see plants better, not guess.