Structural Inference: Interpreting Small Language Models with Susceptibilities

Published: April 25, 2025 | arXiv ID: 2504.18274v2

By: Garrett Baker, George Wang, Jesse Hoogland, and more

Potential Business Impact:

Identifies which tokens in the training data most influence specific components of a language model, supporting interpretability and targeted model debugging.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We develop a linear response framework for interpretability that treats a neural network as a Bayesian statistical mechanical system. A small perturbation of the data distribution, for example shifting the Pile toward GitHub or legal text, induces a first-order change in the posterior expectation of an observable localized on a chosen component of the network. The resulting susceptibility can be estimated efficiently with local SGLD samples and factorizes into signed, per-token contributions that serve as attribution scores. We combine these susceptibilities into a response matrix whose low-rank structure separates functional modules such as multigram and induction heads in a 3M-parameter transformer.
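The core quantity described above, a susceptibility measuring the first-order response of a posterior expectation to a shift in the data distribution, reduces in linear-response theory to a posterior covariance between the observable and the perturbation's log-likelihood shift. The sketch below is a minimal, hypothetical illustration of that fluctuation-dissipation identity using plain Monte Carlo averages over posterior samples; the paper's actual estimator uses local SGLD sampling and per-token factorization, neither of which is reproduced here.

```python
import numpy as np

def estimate_susceptibility(observable_vals, delta_loglik):
    """Estimate a linear-response susceptibility as the sample covariance
    between an observable O (e.g. a quantity localized on one network
    component) and the perturbation's per-sample log-likelihood shift.

    This is a simplified stand-in for the paper's SGLD-based estimator:
    both arguments are assumed to be arrays of values evaluated on
    (hypothetical) posterior samples of the model parameters.
    """
    o = np.asarray(observable_vals, dtype=float)
    d = np.asarray(delta_loglik, dtype=float)
    # Fluctuation-dissipation: chi ≈ Cov(O, Δlog p) under the posterior.
    return float(np.mean((o - o.mean()) * (d - d.mean())))

# Illustrative usage with synthetic "posterior samples": an observable
# that responds linearly to the perturbation should show a nonzero chi.
rng = np.random.default_rng(0)
delta = rng.normal(size=10_000)            # per-sample log-likelihood shift
obs = 2.0 * delta + rng.normal(size=10_000)  # observable correlated with it
chi = estimate_susceptibility(obs, delta)
```

Because the log-likelihood shift decomposes as a sum over tokens, the same covariance can be computed token by token, yielding the signed per-token attribution scores the abstract describes.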


Page Count
52 pages

Category
Computer Science:
Machine Learning (CS)