Score: 2

Revisiting Judge Decoding from First Principles via Training-Free Distributional Divergence

Published: January 8, 2026 | arXiv ID: 2601.04766v1

By: Shengyin Sun, Yiming Li, Renxi Liu, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Makes AI systems answer questions much faster and more cheaply.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Judge Decoding accelerates LLM inference by relaxing the strict verification of Speculative Decoding, yet it typically relies on expensive and noisy supervision. In this work, we revisit this paradigm from first principles, revealing that the "criticality" scores learned via costly supervision are intrinsically encoded in the draft-target distributional divergence. We theoretically prove a structural correspondence between learned linear judges and Kullback-Leibler (KL) divergence, demonstrating they rely on the same underlying logit primitives. Guided by this, we propose a simple, training-free verification mechanism based on KL divergence. Extensive experiments across reasoning and coding benchmarks show that our method matches or outperforms complex trained judges (e.g., AutoJudge), offering superior robustness to domain shifts and eliminating the supervision bottleneck entirely.
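
As a rough illustration of the training-free idea in the abstract, the sketch below scores each drafted token by the KL divergence between the target and draft next-token distributions and accepts it when that divergence falls below a threshold, in place of a trained judge head. The function name `kl_divergence_accept`, the divergence direction, and the threshold value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def kl_divergence_accept(draft_logits: torch.Tensor,
                         target_logits: torch.Tensor,
                         threshold: float = 0.5) -> torch.Tensor:
    """Training-free verification sketch for speculative decoding.

    Accepts a drafted token when the KL divergence between the target and
    draft next-token distributions is small, instead of querying a trained
    judge. Shapes: [num_draft_tokens, vocab_size]. Returns a boolean mask
    (True = keep the draft token).
    """
    draft_log_probs = F.log_softmax(draft_logits, dim=-1)
    target_log_probs = F.log_softmax(target_logits, dim=-1)
    # KL(target || draft) per drafted position; the direction and the
    # threshold below are illustrative choices, not specified by the paper.
    kl = torch.sum(target_log_probs.exp() * (target_log_probs - draft_log_probs), dim=-1)
    return kl < threshold


if __name__ == "__main__":
    # Toy usage with random logits standing in for real model outputs.
    vocab_size, num_draft = 32000, 4
    draft = torch.randn(num_draft, vocab_size)
    target = draft + 0.1 * torch.randn(num_draft, vocab_size)  # nearby distribution
    print(kl_divergence_accept(draft, target))
```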

Country of Origin
🇨🇳 🇭🇰 Hong Kong, China

Page Count
16 pages

Category
Computer Science:
Computation and Language