Revisiting Judge Decoding from First Principles via Training-Free Distributional Divergence
By: Shengyin Sun, Yiming Li, Renxi Liu, and more
Potential Business Impact:
Makes AI answer questions much faster and cheaper.
Judge Decoding accelerates LLM inference by relaxing the strict verification of Speculative Decoding, yet it typically relies on expensive and noisy supervision. In this work, we revisit this paradigm from first principles, revealing that the "criticality" scores learned via costly supervision are intrinsically encoded in the draft-target distributional divergence. We theoretically prove a structural correspondence between learned linear judges and Kullback-Leibler (KL) divergence, demonstrating that both rely on the same underlying logit primitives. Guided by this, we propose a simple, training-free verification mechanism based on KL divergence. Extensive experiments across reasoning and coding benchmarks show that our method matches or outperforms complex trained judges (e.g., AutoJudge), offering superior robustness to domain shifts and eliminating the supervision bottleneck entirely.
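The core idea can be illustrated with a minimal sketch: accept a draft token whenever the KL divergence between the target and draft next-token distributions stays below a threshold, instead of applying a learned judge. The function names and the `threshold` hyperparameter below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p_logits, q_logits):
    """KL(P || Q) between two categorical distributions given as logits."""
    p = softmax(p_logits)
    q = softmax(q_logits)
    # Small epsilon guards against log(0) for near-zero probabilities.
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def verify_tokens(draft_logits, target_logits, threshold=1.0):
    """Accept the longest draft prefix whose per-token target-draft KL
    stays below `threshold` (a hypothetical tunable, not from the paper).

    draft_logits, target_logits: sequences of per-position logit vectors.
    Returns the number of accepted draft tokens.
    """
    accepted = 0
    for d, t in zip(draft_logits, target_logits):
        if kl_divergence(t, d) > threshold:
            break  # first disagreement ends acceptance, as in speculative decoding
        accepted += 1
    return accepted
```

Because the divergence is computed directly from logits the two models already produce, no judge training or labeled supervision is required; the threshold is the only free parameter.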
Similar Papers
Beyond Single-Point Judgment: Distribution Alignment for LLM-as-a-Judge
Artificial Intelligence
Makes AI judges understand opinions better.
Demystifying LLM-as-a-Judge: Analytically Tractable Model for Inference-Time Scaling
Machine Learning (CS)
Makes AI better by trying more answers.
Judge Decoding: Faster Speculative Sampling Requires Going Beyond Model Alignment
Machine Learning (CS)
Makes AI talk faster by judging answers better.