A Theoretical Lens for RL-Tuned Language Models via Energy-Based Models
By: Zhiquan Tan, Yinrong Hong
Large language models (LLMs) trained via KL-regularized reinforcement learning demonstrate strong instruction following, self-correction, and reasoning abilities. Yet the theoretical foundations of these behaviors remain poorly understood. We exploit the closed-form energy-based model (EBM) structure of the optimal KL-regularized policy to provide a unified variational analysis of LLMs. For instruction-tuned models, under natural assumptions on reward potentials and pretraining symmetry, we prove that the transition kernel satisfies detailed balance with respect to a scalar potential encoding response quality. This yields monotonic KL convergence to a high-quality stationary distribution, bounded hitting times to superior states, and exponential mixing governed by the spectral gap. For reasoning models trained with verifiable rewards (RLVR), we show that the objective is equivalent to expected KL minimization toward an optimal reasoning distribution, with the suboptimality gap reducing to the Bernoulli KL between target and current accuracies along the natural gradient flow. This perspective helps explain empirically observed entropy-accuracy trade-offs.
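As background for the abstract's claims, the closed-form EBM structure of the optimal KL-regularized policy and the Bernoulli KL are standard objects; the sketch below uses generic notation (reward r, regularization strength β, reference policy π_ref, accuracies p and p*) chosen for illustration rather than taken from the paper.

\[
\pi^{*}(y \mid x) \;=\; \frac{\pi_{\mathrm{ref}}(y \mid x)\,\exp\!\big(r(x,y)/\beta\big)}{Z(x)},
\qquad
Z(x) \;=\; \sum_{y} \pi_{\mathrm{ref}}(y \mid x)\,\exp\!\big(r(x,y)/\beta\big),
\]

which maximizes \(\mathbb{E}_{y \sim \pi}\big[r(x,y)\big] - \beta\,\mathrm{KL}\big(\pi(\cdot \mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot \mid x)\big)\) and can be read as an EBM with energy \(-r(x,y)/\beta - \log \pi_{\mathrm{ref}}(y \mid x)\). The Bernoulli KL referenced in the RLVR result is

\[
\mathrm{kl}\big(p^{*} \,\|\, p\big) \;=\; p^{*} \log \frac{p^{*}}{p} \;+\; (1 - p^{*}) \log \frac{1 - p^{*}}{1 - p},
\]

with \(p\) the current accuracy under the verifiable reward and \(p^{*}\) the target accuracy.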
Similar Papers
Autoregressive Language Models are Secretly Energy-Based Models: Insights into the Lookahead Capabilities of Next-Token Prediction
Machine Learning (CS)
Links autoregressive next-token prediction to energy-based models to explain lookahead behavior.
Revisiting LLM Reasoning via Information Bottleneck
Artificial Intelligence
Revisits LLM reasoning through an information-bottleneck lens.
Particle Dynamics for Latent-Variable Energy-Based Models
Machine Learning (CS)
Trains latent-variable energy-based models using particle dynamics.