Graph Diffusion Counterfactual Explanation
By: David Bechtoldt, Sidney Bender
Potential Business Impact:
Helps explain why AI models make the decisions they do on graph data.
Machine learning models that operate on graph-structured data, such as molecular graphs or social networks, often make accurate predictions but offer little insight into why those predictions are made. Counterfactual explanations address this challenge by seeking the closest alternative scenario in which the model's prediction would change. Although counterfactual explanations are extensively studied for tabular data and computer vision, the graph domain remains comparatively underexplored. Constructing graph counterfactuals is intrinsically difficult because graphs are discrete, non-Euclidean objects. We introduce Graph Diffusion Counterfactual Explanation, a novel framework for generating counterfactual explanations on graph data that combines discrete diffusion models with classifier-free guidance. We empirically demonstrate that our method reliably generates counterfactuals that are in-distribution and minimally different in structure, for both discrete classification targets and continuous properties.
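The abstract names classifier-free guidance as the steering mechanism. The paper's own formulation is not reproduced here, but the general idea can be sketched: at each denoising step, blend the unconditional model's output with a condition-aware output, where the guidance weight `w` controls how strongly the target (e.g., the desired counterfactual class) is enforced. The function names, the two-category edge variable, and all numbers below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def classifier_free_guidance(logits_uncond, logits_cond, w):
    # Blend unconditional and conditional denoiser logits.
    # w = 0 recovers the unconditional model; larger w pushes
    # samples harder toward the conditioning target.
    return logits_uncond + w * (logits_cond - logits_uncond)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy denoising step for a single discrete edge variable with two
# categories (absent / present). All values are made up for illustration.
logits_uncond = np.array([2.0, 0.0])   # unconditional model prefers "edge absent"
logits_cond   = np.array([0.0, 2.0])   # target condition prefers "edge present"

# With w = 1.5 the guided distribution favors the conditioned category.
probs = softmax(classifier_free_guidance(logits_uncond, logits_cond, w=1.5))
```

In a discrete graph diffusion setting, a step like this would be applied per node and per edge category before sampling the next (less noisy) graph state.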
Similar Papers
LeapFactual: Reliable Visual Counterfactual Explanation Using Conditional Flow Matching
Machine Learning (CS)
Shows how to change answers to be correct.
Enhancing XAI Narratives through Multi-Narrative Refinement and Knowledge Distillation
Machine Learning (CS)
Makes AI decisions easy to understand with stories.
Counterfactual Forecasting of Human Behavior using Generative AI and Causal Graphs
Machine Learning (CS)
Predicts how users will act if things change.