
DocVXQA: Context-Aware Visual Explanations for Document Question Answering

Published: May 12, 2025 | arXiv ID: 2505.07496v1

By: Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky, and more

Potential Business Impact:

Shows users where in a document the model found its answer, making automated document QA easier to trust and audit.

Business Areas:
Semantic Search, Internet Services

We propose DocVXQA, a novel framework for visually self-explainable document question answering. The framework is designed not only to produce accurate answers to questions but also to learn visual heatmaps that highlight contextually critical regions, thereby offering interpretable justifications for the model's decisions. To integrate explanations into the learning process, we quantitatively formulate explainability principles as explicit learning objectives. Unlike conventional methods that emphasize only the regions pertinent to the answer, our framework delivers explanations that are contextually sufficient while remaining representation-efficient. This fosters user trust while achieving a balance between predictive performance and interpretability in DocVQA applications. Extensive experiments, including human evaluation, provide strong evidence supporting the effectiveness of our method. The code is available at https://github.com/dali92002/DocVXQA.
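The abstract's idea of formulating explainability principles as explicit learning objectives can be pictured as a combined loss: a standard answer loss keeps the heatmap contextually sufficient (enough of the page is retained to answer correctly), while a sparsity penalty keeps it representation-efficient (only a small area stays highlighted). The sketch below is a minimal illustration under these assumptions; the function name, weights, and tensor shapes are hypothetical and are not taken from the DocVXQA code.

```python
import torch
import torch.nn.functional as F

def explainable_qa_loss(answer_logits, answer_target, heatmap, lambda_sparsity=0.1):
    """Illustrative combined objective (hypothetical, not the authors' code).

    answer_logits: (batch, num_answers) predictions computed from document
                   features gated by the heatmap.
    answer_target: (batch,) ground-truth answer indices.
    heatmap:       (batch, H, W) soft mask in [0, 1] over page regions.
    """
    # Predictive accuracy: if the heatmap keeps enough context to answer,
    # this term stays low (contextual sufficiency).
    task_loss = F.cross_entropy(answer_logits, answer_target)

    # Representation efficiency: penalize the highlighted area so only
    # contextually critical regions survive.
    sparsity_loss = heatmap.mean()

    return task_loss + lambda_sparsity * sparsity_loss

# Toy usage with random tensors, just to show the shapes involved.
logits = torch.randn(4, 100)          # 4 documents, 100 candidate answers
target = torch.randint(0, 100, (4,))  # ground-truth answer ids
mask = torch.rand(4, 32, 32)          # soft heatmaps over a 32x32 patch grid
print(explainable_qa_loss(logits, target, mask).item())
```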

Page Count
21 pages

Category
Computer Science:
Computer Vision and Pattern Recognition