Unsupervised decoding of encoded reasoning using language model interpretability
By: Ching Fang, Samuel Marks
Potential Business Impact:
Uncovers how AI thinks, even when hidden.
As large language models become increasingly capable, there is growing concern that they may develop reasoning processes that are encoded or hidden from human oversight. To investigate whether current interpretability techniques can penetrate such encoded reasoning, we construct a controlled testbed by fine-tuning a reasoning model (DeepSeek-R1-Distill-Llama-70B) to perform chain-of-thought reasoning in the ROT-13 cipher while maintaining intelligible English outputs. We evaluate mechanistic interpretability methods, in particular logit lens analysis, on their ability to decode the model's hidden reasoning process using only internal activations. We show that the logit lens can effectively translate the encoded reasoning, with accuracy peaking in intermediate-to-late layers. Finally, we develop a fully unsupervised decoding pipeline that combines the logit lens with automated paraphrasing, achieving substantial accuracy in reconstructing complete reasoning transcripts from internal model representations. These findings suggest that current mechanistic interpretability techniques may be more robust to simple forms of encoded reasoning than previously understood. Our work provides an initial framework for evaluating interpretability methods against models that reason in non-human-readable formats, contributing to the broader challenge of maintaining oversight over increasingly capable AI systems.
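To make the core technique concrete, below is a minimal logit-lens sketch, not the paper's actual pipeline: the Hugging Face model identifier, the example ROT-13 prompt, and the choice of layer 60 are all illustrative assumptions, and the norm/lm_head attribute names follow the standard Llama implementation in transformers. It projects an intermediate layer's hidden states through the model's final layer norm and unembedding to read out a token-level interpretation, then applies ROT-13 (which is its own inverse) to recover English from an encoded readout.

```python
# Illustrative logit-lens sketch; model name, prompt, and layer index are assumptions.
import codecs
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "deepseek-ai/DeepSeek-R1-Distill-Llama-70B"  # assumed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical ROT-13 text ("This is the encoded reasoning trace.")
prompt = "Guvf vf gur rapbqrq ernfbavat genpr."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# out.hidden_states is a tuple of (num_layers + 1) tensors, each [batch, seq, hidden].
layer = 60  # an intermediate-to-late layer; the best layer is an empirical choice
hidden = out.hidden_states[layer]

# Logit lens: apply the final RMSNorm and the unembedding matrix to the chosen layer.
normed = model.model.norm(hidden)
logits = model.lm_head(normed)
decoded_ids = logits.argmax(dim=-1)[0]
lens_text = tokenizer.decode(decoded_ids)

# If the lens readout is itself ROT-13, a single codecs call recovers plain English.
print(codecs.encode(lens_text, "rot_13"))
```

In the paper's full pipeline, a readout like this is further cleaned up by automated paraphrasing before being compared against the ground-truth reasoning; the snippet above only covers the layer-wise decoding step.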
Similar Papers
Reasoning-Intensive Regression
Computation and Language
Helps computers find hidden numbers in text.
Lightweight Latent Reasoning for Narrative Tasks
Computation and Language
Makes AI think faster and use less power.
Beyond Transcription: Mechanistic Interpretability in ASR
Sound
Helps computers understand speech better by seeing inside.