Autoregressive Language Models are Secretly Energy-Based Models: Insights into the Lookahead Capabilities of Next-Token Prediction
By: Mathieu Blondel, Michael E. Sander, Germain Vivier-Ardisson, et al.
Autoregressive models (ARMs) currently constitute the dominant paradigm for large language models (LLMs). Energy-based models (EBMs) represent another class of models, which have historically been less prevalent in LLM development, yet naturally characterize the optimal policy in post-training alignment. In this paper, we provide a unified view of these two model classes. Taking the chain rule of probability as a starting point, we establish an explicit bijection between ARMs and EBMs in function space, which we show to correspond to a special case of the soft Bellman equation in maximum entropy reinforcement learning. Building upon this bijection, we derive the equivalence between supervised learning of ARMs and EBMs. Furthermore, we analyze the distillation of EBMs into ARMs by providing theoretical error bounds. Our results provide insights into the ability of ARMs to plan ahead, despite being based on the next-token prediction paradigm.
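As a rough sketch of the correspondence described above (the notation here is illustrative and not necessarily the paper's), an energy-based model over sequences and its autoregressive factorization can be related as follows:

\[
p(x_{1:T}) \propto \exp\!\bigl(-E(x_{1:T})\bigr),
\qquad
V_t(x_{<t}) := \log \sum_{x_{t:T}} \exp\!\bigl(-E(x_{1:T})\bigr),
\]
\[
p(x_t \mid x_{<t}) = \exp\!\bigl(V_{t+1}(x_{\le t}) - V_t(x_{<t})\bigr),
\qquad
V_t(x_{<t}) = \log \sum_{x_t} \exp\!\bigl(V_{t+1}(x_{\le t})\bigr).
\]

The last identity is a log-sum-exp recursion of the soft Bellman form: reading it left to right turns an EBM's energy into next-token conditionals via the chain rule, while reading it right to left recovers the energy as the accumulated negative log-likelihood (up to the constant \(\log Z\)), which is the sense in which the two model classes can be placed in bijection.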
Similar Papers
Energy-Based Reward Models for Robust Language Model Alignment
Computation and Language
Proposes energy-based reward models to make language model alignment more robust.
Beyond Next-Token Prediction: A Performance Characterization of Diffusion versus Autoregressive Language Models
Machine Learning (CS)
Characterizes the performance of diffusion language models relative to autoregressive ones.
Closed-Loop Transformers: Autoregressive Modeling as Iterative Latent Equilibrium
Machine Learning (CS)
Recasts autoregressive modeling as an iterative latent equilibrium, letting the model revisit its predictions for better accuracy.