On the Effect of Uncertainty on Layer-wise Inference Dynamics
By: Sunwoo Kim, Haneul Yoo, Alice Oh
Potential Business Impact:
Helps AI know when it's unsure.
Understanding how large language models (LLMs) internally represent and process their predictions is central to detecting uncertainty and preventing hallucinations. While several studies have shown that models encode uncertainty in their hidden states, how that uncertainty affects the way those hidden states are processed remains underexplored. In this work, we demonstrate that the layer-wise dynamics of output token probabilities for certain and uncertain outputs are largely aligned, suggesting that uncertainty does not substantially alter inference dynamics. Specifically, we use the Tuned Lens, a variant of the Logit Lens, to analyze the layer-wise probability trajectories of final prediction tokens across 11 datasets and 5 models. Treating incorrect predictions as a proxy for higher epistemic uncertainty, we find that certain and uncertain predictions follow aligned trajectories, both exhibiting abrupt increases in confidence at similar layers. We balance this finding with evidence that more competent models may learn to process uncertainty differently. Our findings challenge the feasibility of simple inference-time methods for detecting uncertainty. More broadly, our work demonstrates how interpretability methods can be used to investigate how uncertainty affects inference.
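To make the analysis concrete, below is a minimal sketch of a Logit-Lens-style probability trajectory, the simpler relative of the Tuned Lens used in the paper (the Tuned Lens additionally trains a per-layer affine translator, which is omitted here). The sketch assumes a Hugging Face decoder-only model; the model name and prompt are hypothetical choices for illustration, not the paper's experimental setup.

```python
# Minimal Logit-Lens-style sketch: track the probability of the model's final
# predicted token across layers. The paper uses the Tuned Lens, which adds a
# learned affine translator per layer; this simpler variant is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any decoder-only HF model with an lm_head works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The capital of France is"  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# The final prediction token, decoded from the last layer at the last position.
final_logits = outputs.logits[0, -1]
pred_token_id = final_logits.argmax().item()

# Decode each intermediate hidden state with the final layer norm + unembedding,
# and record the probability assigned to the final prediction token.
trajectory = []
for hidden in outputs.hidden_states:  # (embeddings, layer 1, ..., layer L)
    h = model.transformer.ln_f(hidden[0, -1])       # GPT-2's final layer norm
    probs = torch.softmax(model.lm_head(h), dim=-1)
    trajectory.append(probs[pred_token_id].item())

print(f"Predicted token: {tokenizer.decode(pred_token_id)!r}")
for layer, p in enumerate(trajectory):
    print(f"layer {layer:2d}: p = {p:.4f}")
```

In the paper's setting, such trajectories would be collected separately for correct (certain) and incorrect (uncertain) predictions and compared, e.g. by checking whether the layer at which confidence rises abruptly differs between the two groups.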
Similar Papers
Unraveling Token Prediction Refinement and Identifying Essential Layers in Language Models
Computation and Language
Helps computers understand information better.
Estimating LLM Uncertainty with Evidence
Computation and Language
Helps computers know when they are wrong.
Pretrained LLMs Learn Multiple Types of Uncertainty
Computation and Language
Helps AI recognize when it's unsure.