The Dead Salmons of AI Interpretability
By Maxime Méloux, Giada Dirupo, François Portet, et al.
In a striking neuroscience study, researchers placed a dead salmon in a functional MRI scanner and showed it images of humans in social situations. Astonishingly, the standard analyses of the time reported brain regions predictive of the depicted social emotions. The explanation, of course, was not supernatural cognition but a cautionary tale about misapplied statistical inference. In AI interpretability, reports of similar "dead salmon" artifacts abound: feature attribution, probing, sparse autoencoding, and even causal analyses can produce plausible-looking explanations for randomly initialized neural networks.

In this work, we examine this phenomenon and argue for a pragmatic statistical-causal reframing: explanations of computational systems should be treated as parameters of a (statistical) model, inferred from computational traces. This perspective goes beyond simply measuring the statistical variability of explanations due to finite sampling of input data: interpretability methods become statistical estimators, findings should be tested against explicit and meaningful alternative computational hypotheses, and uncertainty should be quantified with respect to the postulated statistical model. It also highlights important theoretical issues, such as the identifiability of common interpretability queries, which we argue is critical for understanding the field's susceptibility to false discoveries, poor generalizability, and high variance. More broadly, situating interpretability within the standard toolkit of statistical inference opens promising avenues for future work aimed at turning AI interpretability into a pragmatic and rigorous science.
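To make the proposed reframing concrete, the sketch below illustrates one form of "dead salmon" control for a probing analysis: the same linear probe is fit on representations from a stand-in "trained" model and from many randomly initialized networks, and the trained-model result is judged against the random-network null distribution rather than against chance alone. The toy data, the random MLP features, and the number of null draws are illustrative assumptions, not the authors' experimental setup.

```python
# Illustrative sketch (not the paper's method): probing with a random-network null.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def random_mlp_features(x, d_hidden=64, seed=None):
    """One hidden layer with random, untrained weights -- the 'dead salmon' model."""
    r = np.random.default_rng(seed)
    W = r.normal(size=(x.shape[1], d_hidden)) / np.sqrt(x.shape[1])
    return np.maximum(x @ W, 0.0)  # ReLU activations

# Toy data: labels depend only on the first two input dimensions.
n, d = 2000, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Stand-in for a trained model's representations: one feature that encodes the
# label-relevant quantity, plus unrelated noise dimensions.
H_trained = np.column_stack([X[:, 0] + X[:, 1], rng.normal(size=(n, 63))])

# Probe accuracy on the trained-model stand-in.
probe = LogisticRegression(max_iter=1000)
acc_trained = cross_val_score(probe, H_trained, y, cv=5).mean()

# Null distribution: the same probe fit on many randomly initialized networks.
null_accs = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    random_mlp_features(X, seed=s), y, cv=5).mean()
    for s in range(20)
])

# Crude permutation-style p-value: how often does a random network probe as well?
p_value = (np.sum(null_accs >= acc_trained) + 1) / (len(null_accs) + 1)
print(f"trained-probe accuracy:        {acc_trained:.3f}")
print(f"random-network accuracy:       {null_accs.mean():.3f} ± {null_accs.std():.3f}")
print(f"p-value vs. random-network null: {p_value:.3f}")
```

In this toy setting, the probe fit on random-network features will typically score well above chance, so reporting probe accuracy against a 50% baseline alone would be misleading; the random-network null distribution is the relevant comparison, which is the spirit of the statistical reframing sketched in the abstract.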