Score: 2

Towards Reliable and Interpretable Document Question Answering via VLMs

Published: September 12, 2025 | arXiv ID: 2509.10129v2

By: Alessio Chen, Simone Giovannini, Andrea Gemelli, and more

Potential Business Impact:

Shows where in a document an answer comes from, making automated document extraction easier to verify and trust.

Business Areas:
Semantic Search, Internet Services

Vision-Language Models (VLMs) have shown strong capabilities in document understanding, particularly in identifying and extracting textual information from complex documents. Despite this, accurately localizing answers within documents remains a major challenge, limiting both interpretability and real-world applicability. To address this, we introduce DocExplainerV0, a plug-and-play bounding-box prediction module that decouples answer generation from spatial localization. This design makes it applicable to existing VLMs, including proprietary systems where fine-tuning is not feasible. Through systematic evaluation, we provide quantitative insights into the gap between textual accuracy and spatial grounding, showing that correct answers often lack reliable localization. Our standardized framework highlights these shortcomings and establishes a benchmark for future research toward more interpretable and robust VLMs for document information extraction.
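
The decoupling described in the abstract can be pictured as a two-stage pipeline: an unchanged, black-box VLM produces the answer text, and a separate plug-and-play module predicts where that answer sits on the page. The sketch below is a minimal illustration under that reading; the names (query_vlm, BBoxPredictor, answer_with_grounding) and the IoU scoring function are assumptions for illustration, not the paper's actual DocExplainerV0 interface or evaluation protocol.

```python
# Hypothetical sketch of the decoupled design: answer generation and spatial
# localization are handled by separate components. All names and return values
# are illustrative placeholders, not the actual DocExplainerV0 API.
from dataclasses import dataclass
from typing import Tuple

Box = Tuple[float, float, float, float]  # (x0, y0, x1, y1), normalized to [0, 1]


@dataclass
class GroundedAnswer:
    text: str   # answer string returned by the VLM
    bbox: Box   # predicted location of that answer on the page


def query_vlm(document_image: bytes, question: str) -> str:
    """Stand-in for any existing VLM (open or proprietary); no fine-tuning assumed."""
    return "42.50 EUR"  # placeholder answer for illustration


class BBoxPredictor:
    """Plug-and-play localization head: given the document, the question, and the
    VLM's answer, predict a bounding box for where the answer appears."""

    def predict(self, document_image: bytes, question: str, answer: str) -> Box:
        return (0.10, 0.62, 0.28, 0.66)  # placeholder box for illustration


def answer_with_grounding(document_image: bytes, question: str,
                          localizer: BBoxPredictor) -> GroundedAnswer:
    answer = query_vlm(document_image, question)                 # 1) answer generation
    bbox = localizer.predict(document_image, question, answer)   # 2) spatial localization
    return GroundedAnswer(text=answer, bbox=bbox)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union against a ground-truth box: one plausible way to
    score spatial grounding separately from textual accuracy."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0
```

Because the localization head sits outside the VLM, it can be attached to proprietary models that only expose text outputs, which is the scenario the abstract highlights. Scoring the answer text and the box with separate metrics is also what surfaces the reported gap: a response can be textually correct while its predicted box barely overlaps the true location.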

Country of Origin
🇮🇹 Italy

Repos / Data Links

Page Count
6 pages

Category
Computer Science:
Computation and Language