Multiple Token Divergence: Measuring and Steering In-Context Computation Density
By: Vincent Herrmann, Eric Alcaide, Michael Wand, et al.
Measuring the in-context computational effort of language models is a key challenge, as metrics like next-token loss fail to capture reasoning complexity. Prior methods based on latent-state compressibility can be invasive and unstable. We propose Multiple Token Divergence (MTD), a simple measure of computational effort defined as the KL divergence between a model's full output distribution and that of a shallow, auxiliary prediction head. MTD can be computed directly from pre-trained models with multiple prediction heads, requiring no additional training. Building on this, we introduce Divergence Steering, a novel decoding method to control the computational character of generated text. We empirically show that MTD is more effective than prior methods at distinguishing complex tasks from simple ones. On mathematical reasoning benchmarks, MTD correlates positively with problem difficulty, and lower MTD is associated with more accurate reasoning. MTD provides a practical, lightweight tool for analyzing and steering the computational dynamics of language models.
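To make the definition concrete, here is a minimal PyTorch sketch of the per-token divergence as the abstract describes it. The function name, tensor shapes, and the KL direction (full distribution relative to the shallow head's) are assumptions based on the abstract's wording, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def multiple_token_divergence(full_logits: torch.Tensor,
                              head_logits: torch.Tensor) -> torch.Tensor:
    """Sketch of per-position MTD as KL(P_full || P_head).

    full_logits: (seq_len, vocab) logits from the model's full output head.
    head_logits: (seq_len, vocab) logits from a shallow auxiliary prediction
                 head attached to the same model (KL direction assumed).
    Returns: (seq_len,) tensor with one divergence value per token position.
    """
    log_p_full = F.log_softmax(full_logits, dim=-1)
    log_p_head = F.log_softmax(head_logits, dim=-1)
    # KL(P || Q) = sum_v P(v) * (log P(v) - log Q(v))
    return (log_p_full.exp() * (log_p_full - log_p_head)).sum(dim=-1)
```

Positions where the shallow head already matches the full model contribute little divergence; positions where the full depth of the network changes the prediction score high, which is what motivates reading MTD as in-context computational effort.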
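The abstract does not specify how Divergence Steering modifies decoding, so the following is one plausible instantiation under stated assumptions: a contrastive-style adjustment that shifts sampling toward (or away from) tokens where the full model diverges from the shallow head. The function name, the lam parameter, and the additive log-probability formulation are all hypothetical.

```python
def divergence_steered_logits(full_logits: torch.Tensor,
                              head_logits: torch.Tensor,
                              lam: float = 0.5) -> torch.Tensor:
    """Hypothetical divergence-steering sketch (not the paper's formulation).

    lam > 0 boosts tokens where the full model disagrees with the shallow
    head (more "computationally effortful" character); lam < 0 favors
    tokens on which the two distributions agree.
    """
    log_p_full = F.log_softmax(full_logits, dim=-1)
    log_p_head = F.log_softmax(head_logits, dim=-1)
    return log_p_full + lam * (log_p_full - log_p_head)

# Usage sketch: sample the next token from the steered distribution
# at the final position of the sequence.
# steered = divergence_steered_logits(full_logits, head_logits, lam=-0.5)
# next_token = torch.multinomial(F.softmax(steered[-1], dim=-1), 1)
```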
Similar Papers
Direct Multi-Token Decoding
Computation and Language
Makes AI write faster by skipping some steps.
ToDi: Token-wise Distillation via Fine-Grained Divergence Control
Computation and Language
Makes big AI models work on small devices.
Reasoning Path Divergence: A New Metric and Curation Strategy to Unlock LLM Diverse Thinking
Computation and Language
Teaches computers many ways to solve one problem.