In Search of Grandmother Cells: Tracing Interpretable Neurons in Tabular Representations
By: Ricardo Knauer, Erik Rodner
Potential Business Impact:
Finds "idea cells" inside smart computer programs.
Foundation models are powerful yet often opaque in their decision-making. A topic of continued interest in both neuroscience and artificial intelligence is whether some neurons behave like grandmother cells, i.e., neurons that are inherently interpretable because they exclusively respond to single concepts. In this work, we propose two information-theoretic measures that quantify the neuronal saliency and selectivity for single concepts. We apply these metrics to the representations of TabPFN, a tabular foundation model, and perform a simple search across neuron-concept pairs to find the most salient and selective pair. Our analysis provides the first evidence that some neurons in such models show moderate, statistically significant saliency and selectivity for high-level concepts. These findings suggest that interpretable neurons can emerge naturally and that they can, in some cases, be identified without resorting to more complex interpretability techniques.
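As a rough illustration of the kind of analysis the abstract describes, the sketch below scores every neuron-concept pair and keeps the most salient and selective one. Mutual information over median-binarized activations is used here as a stand-in for the paper's saliency measure, and the fraction of a neuron's total concept information concentrated on a single concept as a stand-in for selectivity; the function names, the scoring rule, and the activation/concept arrays are illustrative assumptions, not the authors' actual metrics or the TabPFN API.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two binary arrays."""
    joint = np.zeros((2, 2))
    for xi in (0, 1):
        for yi in (0, 1):
            joint[xi, yi] = np.mean((x == xi) & (y == yi))
    px = joint.sum(axis=1, keepdims=True)   # marginal over x
    py = joint.sum(axis=0, keepdims=True)   # marginal over y
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

def search_neuron_concept_pairs(activations, concepts):
    """Score every (neuron, concept) pair and return the best-scoring one.

    activations: (n_samples, n_neurons) float array of hidden representations
                 (e.g. extracted from a tabular foundation model via hooks, not shown).
    concepts:    (n_samples, n_concepts) binary array of concept labels.
    """
    # Binarize each neuron at its median so MI is computed over discrete variables.
    binarized = (activations > np.median(activations, axis=0)).astype(int)
    n_neurons, n_concepts = binarized.shape[1], concepts.shape[1]

    best_score, best_pair = -np.inf, None
    for j in range(n_neurons):
        # Saliency proxy: MI between neuron j and each concept.
        saliencies = np.array(
            [mutual_information(binarized[:, j], concepts[:, c]) for c in range(n_concepts)]
        )
        for c in range(n_concepts):
            # Selectivity proxy: share of the neuron's concept information on this concept.
            selectivity = saliencies[c] / (saliencies.sum() + 1e-12)
            score = saliencies[c] * selectivity
            if score > best_score:
                best_score, best_pair = score, (j, c)
    return best_score, best_pair

# Example with random placeholder data (stands in for real activations and concept labels).
rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 64))           # hypothetical hidden activations
cons = rng.integers(0, 2, size=(500, 5))    # hypothetical binary concept labels
print(search_neuron_concept_pairs(acts, cons))
```

In a real analysis, statistical significance of the best pair would additionally need to be assessed, e.g. against a permutation baseline, since an exhaustive search over pairs invites chance findings.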
Similar Papers
XNNTab -- Interpretable Neural Networks for Tabular Data using Sparse Autoencoders
Machine Learning (CS)
Lets smart computers explain their decisions.
Faithful and Stable Neuron Explanations for Trustworthy Mechanistic Interpretability
Artificial Intelligence
Makes AI's "thinking" understandable and trustworthy.
Towards Interpretable Deep Neural Networks for Tabular Data
Machine Learning (CS)
Explains computer decisions made from data.