Combating Spurious Correlations in Graph Interpretability via Self-Reflection
By: Kecheng Cai, Chenyang Xu, Chao Peng
Potential Business Impact:
Helps computers find real patterns, not fake ones.
Interpretable graph learning has recently emerged as a popular research topic in machine learning. The goal is to identify the nodes and edges of an input graph that are crucial for performing a specific graph reasoning task. A number of studies have been conducted in this area, and various benchmark datasets have been proposed to facilitate evaluation. Among them, one of the most challenging is the Spurious-Motif benchmark, introduced at ICLR 2022. The datasets in this synthetic benchmark are deliberately constructed to contain spurious correlations, making it particularly difficult for models to distinguish truly relevant structures from misleading patterns. As a result, existing methods perform significantly worse on this benchmark than on others. In this paper, we focus on improving interpretability on the challenging Spurious-Motif datasets. We demonstrate that self-reflection, a technique commonly used with large language models to tackle complex tasks, can also be adapted to enhance interpretability on datasets with strong spurious correlations. Specifically, we propose a self-reflection framework that can be integrated with existing interpretable graph learning methods: when such a method produces importance scores for each node and edge, our framework feeds these predictions back into the original method to perform a second round of evaluation. This iterative process mirrors how large language models employ self-reflective prompting to reassess their previous outputs. We further analyze the reasons behind this improvement from the perspective of graph representation learning, which motivates a fine-tuning training method based on the same feedback mechanism.
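The feedback loop the abstract describes can be sketched as a thin wrapper around any edge-scoring explainer. This is a minimal illustration, not the paper's implementation: the function names (`self_reflect`, `toy_explain`) and the specific feedback wiring (blending each round's scores with the previous round's) are assumptions made for the example.

```python
# Sketch of the self-reflection loop: run an explainer, then feed its own
# importance scores back to it for another round of evaluation.
# `explain` stands in for any interpretable graph learning method; how the
# prior scores influence re-evaluation is an assumed design choice here.

def self_reflect(explain, graph, rounds=2):
    """Iteratively re-run the explainer on its own previous output."""
    scores = None
    for _ in range(rounds):
        scores = explain(graph, prior_scores=scores)
    return scores

def toy_explain(graph, prior_scores=None):
    """Hypothetical explainer: score each edge by its endpoint degrees,
    optionally blended with the previous round's scores (the feedback)."""
    deg = {}
    for u, v in graph:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    raw = [deg[u] * deg[v] for u, v in graph]
    if prior_scores is not None:
        # Self-reflection step: reassess in light of the earlier judgement.
        raw = [0.5 * r + 0.5 * p for r, p in zip(raw, prior_scores)]
    total = sum(raw)
    return [r / total for r in raw]  # normalized edge importance scores

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(self_reflect(toy_explain, edges, rounds=2))
```

In a real pipeline, `toy_explain` would be replaced by a trained interpretable GNN, and the second round would let the model reassess edges whose high scores may stem from spurious correlations.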