Faithful and Stable Neuron Explanations for Trustworthy Mechanistic Interpretability
By: Ge Yan, Tuomas Oikarinen, Tsui-Wei Weng, et al.
Neuron identification is a popular tool in mechanistic interpretability, aiming to uncover the human-interpretable concepts represented by individual neurons in deep networks. While algorithms such as Network Dissection and CLIP-Dissect achieve great empirical success, they lack a rigorous theoretical foundation, which is crucial for trustworthy and reliable explanations. In this work, we observe that neuron identification can be viewed as the inverse process of machine learning, which allows us to derive guarantees for neuron explanations. Based on this insight, we present the first theoretical analysis of two fundamental challenges: (1) Faithfulness: whether the identified concept faithfully represents the neuron's underlying function, and (2) Stability: whether the identification results are consistent across probing datasets. We derive generalization bounds for widely used similarity metrics (e.g., accuracy, AUROC, IoU) to guarantee faithfulness, and propose a bootstrap ensemble procedure that quantifies stability, along with a Bootstrap Explanation (BE) method that generates concept prediction sets with guaranteed coverage probability. Experiments on both synthetic and real data validate our theoretical results and demonstrate the practicality of our method, providing an important step toward trustworthy neuron identification.
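To make the described procedure concrete, below is a minimal sketch (not the authors' code) of a bootstrap-ensemble approach to neuron identification: resample the probing dataset with replacement, score each candidate concept against the neuron using a similarity metric (IoU here), and collect the concepts selected across resamples into a prediction set. The function and variable names (`bootstrap_concept_set`, `neuron_acts`, `concept_labels`) and the selection rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of a bootstrap-ensemble neuron-identification procedure.
import numpy as np

def iou_score(neuron_on: np.ndarray, concept_on: np.ndarray) -> float:
    """Intersection-over-union between binarized neuron activations and concept labels."""
    inter = np.logical_and(neuron_on, concept_on).sum()
    union = np.logical_or(neuron_on, concept_on).sum()
    return inter / union if union > 0 else 0.0

def bootstrap_concept_set(neuron_acts, concept_labels, threshold=0.5,
                          n_boot=1000, alpha=0.1, seed=None):
    """
    neuron_acts:    (n_samples,) real-valued activations of one neuron.
    concept_labels: (n_samples, n_concepts) binary concept annotations.
    Returns the indices of concepts selected in at least an alpha fraction of
    bootstrap resamples (a concept prediction set), plus selection frequencies.
    """
    rng = np.random.default_rng(seed)
    n, k = concept_labels.shape
    neuron_on = neuron_acts > threshold              # binarize activations
    counts = np.zeros(k)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample probing data with replacement
        scores = [iou_score(neuron_on[idx], concept_labels[idx, j]) for j in range(k)]
        counts[np.argmax(scores)] += 1               # record the best concept on this resample
    freqs = counts / n_boot
    return np.flatnonzero(freqs >= alpha), freqs

# Example usage with synthetic data: a neuron correlated with concept 3.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=(500, 10))
    acts = labels[:, 3] + 0.3 * rng.normal(size=500)
    concept_set, freqs = bootstrap_concept_set(acts, labels, seed=1)
    print(concept_set, freqs.round(2))
```

Concepts that win only on a few resamples are excluded from the set, so the size of the returned set gives a rough, data-driven indication of how stable the explanation is across probing datasets.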