Score: 1

Training Language Models to Explain Their Own Computations

Published: November 11, 2025 | arXiv ID: 2511.08579v1

By: Belinda Z. Li, Zifan Carl Guo, Vincent Huang, and more

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

Enables language models to describe their own internal computations in natural language, supporting interpretability, debugging, and auditing of deployed models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Can language models (LMs) learn to faithfully describe their internal computations? Are they better able to describe themselves than other models? We study the extent to which LMs' privileged access to their own internals can be leveraged to produce new techniques for explaining their behavior. Using existing interpretability techniques as a source of ground truth, we fine-tune LMs to generate natural language descriptions of (1) the information encoded by LM features, (2) the causal structure of LMs' internal activations, and (3) the influence of specific input tokens on LM outputs. When trained with only tens of thousands of example explanations, explainer models exhibit non-trivial generalization to new queries. This generalization appears partly attributable to explainer models' privileged access to their own internals: using a model to explain its own computations generally works better than using a *different* model to explain its computations (even if the other model is significantly more capable). Our results suggest not only that LMs can learn to reliably explain their internal computations, but that such explanations offer a scalable complement to existing interpretability methods.
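To make the recipe in the abstract concrete, below is a minimal sketch of the general idea: an existing interpretability method supplies ground-truth explanations, and the same LM is then fine-tuned to produce those explanations from natural-language queries. This is an illustration under assumptions, not the authors' pipeline; the model name, prompt template, and the `explain_feature_with_existing_tool` helper are hypothetical placeholders.

```python
# Sketch of supervised fine-tuning an LM to explain its own features,
# assuming a HuggingFace-style stack. Not the paper's actual code.
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from datasets import Dataset

MODEL_NAME = "gpt2"  # stand-in; the paper fine-tunes larger explainer models

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


def explain_feature_with_existing_tool(feature_id: int) -> str:
    """Hypothetical hook: in practice an existing interpretability pipeline
    (e.g. automated feature labeling) would supply the ground-truth
    description of what this feature encodes."""
    return f"Feature {feature_id} activates on mentions of dates and years."


# Build (query, explanation) pairs; the paper uses tens of thousands of these.
records = []
for feature_id in range(100):
    prompt = f"Describe what feature {feature_id} in your layer-10 MLP encodes.\n"
    target = explain_feature_with_existing_tool(feature_id)
    enc = tokenizer(
        prompt + target + tokenizer.eos_token,
        truncation=True,
        max_length=128,
        padding="max_length",
    )
    example = {k: v for k, v in enc.items()}
    # Simplification: a real setup would mask prompt and padding tokens
    # in the labels so loss is only taken on the explanation text.
    example["labels"] = list(example["input_ids"])
    records.append(example)

train_ds = Dataset.from_list(records)

args = TrainingArguments(
    output_dir="explainer-sft",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=5e-5,
    logging_steps=50,
)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```

The same pattern would extend to the paper's other two explanation targets (causal structure of activations and token-level influence on outputs) by swapping in a different ground-truth generator and prompt template.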

Country of Origin
🇺🇸 United States

Page Count
33 pages

Category
Computer Science:
Computation and Language