Predictive Concept Decoders: Training Scalable End-to-End Interpretability Assistants

Published: December 17, 2025 | arXiv ID: 2512.15712v1

By: Vincent Huang, Dami Choi, Daniel D. Johnson, and more

Potential Business Impact:

Helps AI systems explain their own internal reasoning, for example to detect jailbreaks or surface hidden user attributes.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Interpreting the internal activations of neural networks can produce more faithful explanations of their behavior, but it is difficult due to the complex structure of activation space. Existing approaches to scalable interpretability use hand-designed agents that make and test hypotheses about how internal activations relate to external behavior. We propose instead to turn this task into an end-to-end training objective, by training interpretability assistants to accurately predict model behavior from activations through a communication bottleneck. Specifically, an encoder compresses activations into a sparse list of concepts, and a decoder reads this list and answers a natural language question about the model. We show how to pretrain this assistant on large unstructured data, then finetune it to answer questions. The resulting architecture, which we call a Predictive Concept Decoder (PCD), enjoys favorable scaling properties: the auto-interp score of the bottleneck concepts improves with data, as does performance on downstream applications. In particular, PCDs can detect jailbreaks, secret hints, and implanted latent concepts, and can accurately surface latent user attributes.
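To make the communication-bottleneck objective concrete, here is a minimal toy sketch of the setup the abstract describes: an encoder compresses an activation vector into a sparse list of concepts, and a decoder reads that list together with a question representation to predict the model's behavior, with both parts trained end to end. All names, dimensions, the top-k sparsity mechanism, and the MLP decoder below are illustrative assumptions, not the paper's actual implementation (the real PCD assistant is pretrained on unstructured data and answers natural language questions).

```python
import torch
import torch.nn as nn

class ConceptEncoder(nn.Module):
    """Compresses a subject-model activation vector into a sparse concept
    vector. Top-k sparsification is an assumed mechanism for illustration."""

    def __init__(self, d_activation: int, n_concepts: int, k: int):
        super().__init__()
        self.proj = nn.Linear(d_activation, n_concepts)
        self.k = k

    def forward(self, activations: torch.Tensor) -> torch.Tensor:
        scores = self.proj(activations)            # (batch, n_concepts)
        topk = torch.topk(scores, self.k, dim=-1)  # keep only k active concepts
        sparse = torch.zeros_like(scores)
        sparse.scatter_(-1, topk.indices, topk.values)
        return sparse                              # sparse "list of concepts"


class ConceptDecoder(nn.Module):
    """Reads the sparse concept vector plus an (embedded) question and
    predicts the subject model's behavior, e.g. answer logits."""

    def __init__(self, n_concepts: int, d_question: int, d_out: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_concepts + d_question, 256),
            nn.ReLU(),
            nn.Linear(256, d_out),
        )

    def forward(self, concepts: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([concepts, question], dim=-1))


# Toy end-to-end objective: predict behavior from activations through the bottleneck.
encoder = ConceptEncoder(d_activation=4096, n_concepts=1024, k=16)
decoder = ConceptDecoder(n_concepts=1024, d_question=128, d_out=2)

activations = torch.randn(8, 4096)   # internal activations of the subject model
question = torch.randn(8, 128)       # embedded question about the model
target = torch.randint(0, 2, (8,))   # observed behavior (e.g. a yes/no answer)

logits = decoder(encoder(activations), question)
loss = nn.functional.cross_entropy(logits, target)
loss.backward()                      # gradients flow through the bottleneck
```

The key design point the sketch tries to capture is that the decoder only sees activations through the sparse concept bottleneck, so any predictive signal must be carried by concepts that can later be inspected and scored (e.g. via auto-interp).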

Page Count
28 pages

Category
Computer Science:
Artificial Intelligence