CQD-SHAP: Explainable Complex Query Answering via Shapley Values
By: Parsa Abbasi, Stefan Heindorf
Potential Business Impact:
Explains why a computer picked an answer.
Complex query answering (CQA) goes beyond the well-studied link prediction task by addressing more sophisticated queries that require multi-hop reasoning over incomplete knowledge graphs (KGs). Research on neural and neurosymbolic CQA methods is still an emerging field. Almost all of these methods can be regarded as black-box models, which may raise concerns about user trust. Although neurosymbolic approaches like CQD are slightly more interpretable, allowing intermediate results to be tracked, the importance of different parts of the query remains unexplained. In this paper, we propose CQD-SHAP, a novel framework that computes the contribution of each query part to the ranking of a specific answer. This contribution explains the value of leveraging a neural predictor that can infer new knowledge from an incomplete KG, rather than a symbolic approach relying solely on existing facts in the KG. CQD-SHAP is formulated based on Shapley values from cooperative game theory and satisfies all the fundamental Shapley axioms. Automated evaluation of these explanations in terms of necessity and sufficiency, together with comparisons against various baselines, shows the effectiveness of this approach for most query types.
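To make the Shapley-value idea concrete, here is a minimal sketch of exact Shapley attribution over query parts. This is not the paper's implementation: the two "hops" of a 2-hop query, the coalition scores, and the value function are all hypothetical stand-ins for a scorer that swaps the neural predictor in for the symbolic lookup on the parts in the coalition.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by enumerating all coalitions.

    players: list of query parts (the "players" of the game).
    v: characteristic function mapping a frozenset of players to a
       real-valued score (e.g., a hypothetical answer-ranking score).
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of part i when joining coalition S
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy example (hypothetical scores): using the neural predictor on both
# hops of a 2-hop query helps the answer's rank more than on either alone.
scores = {
    frozenset(): 0.0,
    frozenset({"hop1"}): 0.2,
    frozenset({"hop2"}): 0.1,
    frozenset({"hop1", "hop2"}): 0.6,
}
phi = shapley_values(["hop1", "hop2"], scores.__getitem__)
# By the efficiency axiom, the attributions sum to v(N) - v(empty set).
```

The exhaustive enumeration is exponential in the number of query parts, which is tolerable here because typical CQA query structures have only a handful of atoms.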
Similar Papers
Efficient and Scalable Neural Symbolic Search for Knowledge Graph Complex Query Answering
Artificial Intelligence
Answers tough questions from smart computer brains faster.
UbiQVision: Quantifying Uncertainty in XAI for Image Recognition
CV and Pattern Recognition
Makes AI doctors' decisions more trustworthy.
Enhancing Interpretability for Vision Models via Shapley Value Optimization
CV and Pattern Recognition
Explains how computers make choices, clearly.