Training Language Models to Explain Their Own Computations
By: Belinda Z. Li, Zifan Carl Guo, Vincent Huang, and more
Potential Business Impact:
Lets computers explain how they think.
Can language models (LMs) learn to faithfully describe their internal computations? Are they better able to describe themselves than other models are? We study the extent to which LMs' privileged access to their own internals can be leveraged to produce new techniques for explaining their behavior. Using existing interpretability techniques as a source of ground truth, we fine-tune LMs to generate natural language descriptions of (1) the information encoded by LM features, (2) the causal structure of LMs' internal activations, and (3) the influence of specific input tokens on LM outputs. When trained with only tens of thousands of example explanations, explainer models exhibit non-trivial generalization to new queries. This generalization appears partly attributable to explainer models' privileged access to their own internals: using a model to explain its own computations generally works better than using a *different* model to explain its computations (even if the other model is significantly more capable). Our results suggest not only that LMs can learn to reliably explain their internal computations, but also that such explanations offer a scalable complement to existing interpretability methods.
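To make the training recipe concrete, here is a minimal sketch of the kind of supervised fine-tuning the abstract describes: explanations produced by an existing interpretability pipeline serve as ground truth, are phrased as (query, explanation) pairs, and the same model is fine-tuned to produce the explanation text. This is not the authors' code; the model name, prompt template, and the two example records are illustrative assumptions.

```python
# Sketch of self-explanation fine-tuning: supervise a model to answer
# interpretability queries about itself, using ground truth produced by an
# existing interpretability pipeline. All data and names below are placeholders.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper fine-tunes larger LMs

# Hypothetical ground-truth explanations (e.g. from automated feature labeling
# or token-attribution methods run on MODEL_NAME itself).
examples = [
    {
        "query": "What does feature 1234 in layer 6 of this model respond to?",
        "explanation": "It activates on tokens related to chemical element names.",
    },
    {
        "query": "Which input token most influences the model's completion of "
                 "'The capital of France is'?",
        "explanation": "The token 'France' has the largest influence on the output.",
    },
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def build_batch(batch):
    """Tokenize query + explanation; supervise only the explanation tokens."""
    input_ids, labels, attention = [], [], []
    for ex in batch:
        prompt_ids = tokenizer(f"Q: {ex['query']}\nA:", add_special_tokens=False).input_ids
        answer_ids = tokenizer(" " + ex["explanation"] + tokenizer.eos_token,
                               add_special_tokens=False).input_ids
        input_ids.append(prompt_ids + answer_ids)
        labels.append([-100] * len(prompt_ids) + answer_ids)  # -100 is ignored by the loss
    max_len = max(len(x) for x in input_ids)
    pad = tokenizer.pad_token_id
    attention = [[1] * len(x) + [0] * (max_len - len(x)) for x in input_ids]
    input_ids = [x + [pad] * (max_len - len(x)) for x in input_ids]
    labels = [y + [-100] * (max_len - len(y)) for y in labels]
    return torch.tensor(input_ids), torch.tensor(attention), torch.tensor(labels)

loader = DataLoader(examples, batch_size=2, shuffle=True, collate_fn=build_batch)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for epoch in range(3):
    for input_ids, attention_mask, labels in loader:
        loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch} loss {loss.item():.3f}")
```

In this sketch the loss is masked so the model is only trained on the explanation tokens, mirroring the idea that the model learns to *report* facts established by the interpretability pipeline rather than to reproduce the query.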
Similar Papers
Explainability of Large Language Models: Opportunities and Challenges toward Generating Trustworthy Explanations
Computation and Language
Helps us understand how AI makes its choices.
Can LLMs Faithfully Explain Themselves in Low-Resource Languages? A Case Study on Emotion Detection in Persian
Computation and Language
Makes AI explain its thoughts more honestly.
On the Notion that Language Models Reason
Computation and Language
Computers learn by copying patterns, not thinking.