IIB-LPO: Latent Policy Optimization via Iterative Information Bottleneck
By: Huilin Deng, Hongchen Luo, Yue Zhu, and more
Potential Business Impact:
Helps computers solve math problems better.
Recent advances in Reinforcement Learning with Verifiable Rewards (RLVR) for Large Language Model (LLM) reasoning have been hindered by a persistent challenge: exploration collapse. The semantic homogeneity of random rollouts often traps models in narrow, over-optimized behaviors. While existing methods leverage policy entropy to encourage exploration, they face inherent limitations. Global entropy regularization is susceptible to reward hacking, which can induce meaningless verbosity, whereas local token-selective updates struggle with the strong inductive bias of pre-trained models. To address this, we propose Latent Policy Optimization via Iterative Information Bottleneck (IIB-LPO), a novel approach that shifts exploration from statistical perturbation of token distributions to topological branching of reasoning trajectories. IIB-LPO triggers latent branching at high-entropy states to diversify reasoning paths and employs the Information Bottleneck principle both as a trajectory filter and a self-reward mechanism, ensuring concise and informative exploration. Empirical results across four mathematical reasoning benchmarks demonstrate that IIB-LPO achieves state-of-the-art performance, surpassing prior methods by margins of up to 5.3% in accuracy and 7.4% in diversity metrics.
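The abstract invokes the Information Bottleneck principle as both a trajectory filter and a self-reward but does not state the objective itself. As a point of reference, a minimal sketch using the classical Tishby-style IB Lagrangian is shown below; how IIB-LPO adapts it to reasoning trajectories is not specified here, so mapping X to the prompt/rollout context, Z to the latent reasoning trajectory, and Y to the verifiable answer is an assumption.

\[
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
\]

Here I(·;·) denotes mutual information and β trades off compression of the input (concise trajectories) against informativeness about the target (answer-relevant reasoning), which matches the abstract's stated goal of "concise and informative exploration."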
Similar Papers
Revisiting LLM Reasoning via Information Bottleneck
Artificial Intelligence
Helps computers reason better on math problems.
Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies
Machine Learning (CS)
Makes AI think better by studying its brain.
Information-Theoretic Reward Modeling for Stable RLHF: Detecting and Mitigating Reward Hacking
Machine Learning (CS)
Stops AI from cheating to get good answers.