Unveiling the Latent Directions of Reflection in Large Language Models
By: Fu-Chieh Chang, Yu-Ting Lee, Pei-Yuan Wu
Potential Business Impact:
Teaches computers to think better by checking their own work.
Reflection, the ability of large language models (LLMs) to evaluate and revise their own reasoning, has been widely used to improve performance on complex reasoning tasks. Yet, most prior work emphasizes designing reflective prompting strategies or reinforcement learning objectives, leaving the inner mechanisms of reflection underexplored. In this paper, we investigate reflection through the lens of latent directions in model activations. We propose a methodology based on activation steering to characterize how instructions with different reflective intentions (no reflection, intrinsic reflection, and triggered reflection) are represented in the model's activation space. By constructing steering vectors between these reflection levels, we demonstrate that (1) new reflection-inducing instructions can be systematically identified, (2) reflective behavior can be directly enhanced or suppressed through activation interventions, and (3) suppressing reflection is considerably easier than stimulating it. Experiments on GSM8k-adv with Qwen2.5-3B and Gemma3-4B reveal clear stratification across reflection levels, and steering interventions confirm the controllability of reflection. Our findings highlight both opportunities (e.g., reflection-enhancing defenses) and risks (e.g., adversarial inhibition of reflection in jailbreak attacks). This work opens a path toward mechanistic understanding of reflective reasoning in LLMs.
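To make the steering-vector idea concrete, the sketch below shows one common way such an intervention is implemented: take the difference of mean activations between prompts with two reflection intentions and add it to the residual stream during generation. This is a minimal illustration, not the authors' exact method; the layer index, steering strength, prompt sets, and helper names are all assumptions for demonstration.

```python
# Minimal sketch of difference-of-means activation steering.
# Assumptions (not from the paper): intervention layer, steering strength,
# and the two contrastive prompt sets below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-3B-Instruct"  # one of the model families used in the paper
LAYER = 16                          # assumed intervention layer
ALPHA = 4.0                         # assumed steering strength (negative suppresses)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

def mean_activation(prompts, layer):
    """Mean residual-stream activation over each prompt's last token.
    hidden_states[0] is the embedding layer, so layer L's output is index L+1."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        acts.append(out.hidden_states[layer + 1][0, -1])
    return torch.stack(acts).mean(dim=0)

# Hypothetical contrastive prompt sets for two reflection levels.
triggered = ["Solve the problem, then double-check each step: ..."]
no_reflect = ["Solve the problem directly: ..."]

# Steering vector: triggered-reflection direction minus no-reflection direction.
steer = mean_activation(triggered, LAYER) - mean_activation(no_reflect, LAYER)

def hook(module, inputs, output):
    # Add the steering vector to the residual stream at every position.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(hook)
ids = tok("A farmer has 12 sheep ...", return_tensors="pt")
gen = model.generate(**ids, max_new_tokens=128)
handle.remove()
print(tok.decode(gen[0], skip_special_tokens=True))
```

Flipping the sign of ALPHA turns the same vector into a reflection suppressor, which connects to the paper's observation that suppression is considerably easier than stimulation.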
Similar Papers
From Emergence to Control: Probing and Modulating Self-Reflection in Language Models
Machine Learning (CS)
Makes AI think again to solve problems better.
Instruct-of-Reflection: Enhancing Large Language Models Iterative Reflection Capabilities via Dynamic-Meta Instruction
Computation and Language
Makes AI smarter by teaching it to rethink its answers.
Illusions of reflection: open-ended task reveals systematic failures in Large Language Models' reflective reasoning
Artificial Intelligence
Computers don't learn from their own mistakes.