DocVXQA: Context-Aware Visual Explanations for Document Question Answering
By: Mohamed Ali Souibgui, Changkyu Choi, Andrey Barsky, et al.
Potential Business Impact:
Shows where in a document the computer found its answer.
We propose DocVXQA, a novel framework for visually self-explainable document question answering. The framework is designed not only to produce accurate answers to questions but also to learn visual heatmaps that highlight contextually critical regions, thereby offering interpretable justifications for the model's decisions. To integrate explanations into the learning process, we quantitatively formulate explainability principles as explicit learning objectives. Unlike conventional methods that emphasize only the regions pertinent to the answer, our framework delivers explanations that are contextually sufficient while remaining representation-efficient. This fosters user trust while achieving a balance between predictive performance and interpretability in DocVQA applications. Extensive experiments, including human evaluation, provide strong evidence supporting the effectiveness of our method. The code is available at https://github.com/dali92002/DocVXQA.
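To give a rough sense of how explainability principles might be cast as explicit learning objectives, the sketch below combines an answer loss with a sufficiency term and a sparsity penalty on the heatmap. This is not the paper's actual formulation; the function, argument names, and loss weights are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def explainability_loss(answer_logits, masked_answer_logits, answer_target,
                        heatmap, lambda_suff=1.0, lambda_eff=0.1):
    """Hypothetical combined objective for self-explainable DocVQA (illustrative only)."""
    # Task term: standard answer loss on the full document input.
    task_loss = F.cross_entropy(answer_logits, answer_target)
    # Contextual sufficiency: the prediction made from only the heatmap-selected
    # regions should match the full-input prediction (KL between distributions).
    suff_loss = F.kl_div(
        F.log_softmax(masked_answer_logits, dim=-1),
        F.softmax(answer_logits, dim=-1),
        reduction="batchmean",
    )
    # Representation efficiency: encourage the heatmap to highlight as little
    # of the document as possible.
    eff_loss = heatmap.abs().mean()
    return task_loss + lambda_suff * suff_loss + lambda_eff * eff_loss
```

In such a setup, the sufficiency term pushes the highlighted regions to carry enough context to reproduce the answer, while the efficiency term keeps the explanation compact; the actual objectives used by DocVXQA are defined in the paper and repository linked above.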
Similar Papers
ChartQA-X: Generating Explanations for Visual Chart Reasoning
CV and Pattern Recognition
Helps computers explain charts and answer questions.
ProtoVQA: An Adaptable Prototypical Framework for Explainable Fine-Grained Visual Question Answering
CV and Pattern Recognition
Helps computers explain why they give their answers.
MedXplain-VQA: Multi-Component Explainable Medical Visual Question Answering
CV and Pattern Recognition
Shows doctors why AI suggests a diagnosis.