Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies
By: Yuqiao Tan, Minzheng Wang, Shizhu He, and more
Potential Business Impact:
Makes AI think better by studying its brain.
Existing reinforcement learning (RL) approaches treat large language models (LLMs) as a single unified policy, overlooking their internal mechanisms. Understanding how the policy evolves across layers and modules is therefore crucial for enabling more targeted optimization and for unraveling complex reasoning mechanisms. In this paper, we decompose the language model policy by leveraging the intrinsic split of the Transformer residual stream and the equivalence between composing hidden states with the unembedding matrix and the resulting samplable policy. This decomposition reveals Internal Layer Policies, corresponding to contributions from individual layers, and Internal Modular Policies, which align with the self-attention and feed-forward network (FFN) components within each layer. By analyzing the entropy of these internal policies, we find that: (a) early layers maintain high entropy for exploration while top layers converge to near-zero entropy for refinement, with convergence patterns varying across model series; and (b) LLaMA's prediction space rapidly converges in the final layer, whereas Qwen-series models, especially Qwen3, exhibit a more human-like, progressively structured reasoning pattern. Motivated by these findings, we propose Bottom-up Policy Optimization (BuPO), a novel RL paradigm that directly optimizes the internal layer policy during early training. By aligning the training objective at lower layers, BuPO reconstructs foundational reasoning capabilities and achieves superior performance. Extensive experiments on complex reasoning benchmarks demonstrate the effectiveness of our method. Our code is available at https://github.com/Trae1ounG/BuPO.
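The per-layer decomposition described in the abstract can be illustrated with a minimal, logit-lens-style sketch: apply the unembedding matrix to the residual stream after each layer to obtain a samplable internal policy, then measure its entropy. This is not the authors' implementation; the model name, the `model.model.norm` module path, and the choice to apply the final normalization before unembedding are assumptions for a LLaMA/Qwen-style architecture.

```python
# Hedged sketch: per-layer "internal policies" via the unembedding matrix, plus their entropy.
# Assumes a causal LM exposing hidden states and a LLaMA/Qwen-style final norm at model.model.norm.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # assumption: any small causal LM works for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The derivative of x^2 is"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple of (num_layers + 1) tensors of shape [batch, seq, d_model]:
# entry 0 is the embedding output, entry i is the residual stream after layer i.
hidden_states = out.hidden_states
unembed = model.get_output_embeddings().weight      # [vocab, d_model]
final_norm = model.model.norm                       # assumption: LLaMA/Qwen-style final RMSNorm

for layer_idx, h in enumerate(hidden_states):
    # Internal layer policy at the last token position: softmax over the
    # (normalized) residual stream projected through the unembedding matrix.
    h_last = final_norm(h[:, -1, :])                # [batch, d_model]
    logits = h_last @ unembed.T                     # [batch, vocab]
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    print(f"layer {layer_idx:2d}: entropy = {entropy.item():.3f}")
```

Per the abstract, BuPO would then attach the RL training objective to the log-probabilities of one of these lower-layer internal policies during early training, rather than only to the final-layer policy; the exact objective is detailed in the paper and repository linked above.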
Similar Papers
Bootstrapping LLMs via Preference-Based Policy Optimization
Artificial Intelligence
Teaches AI to follow human wishes better.
Boundary-Guided Policy Optimization for Memory-efficient RL of Diffusion Large Language Models
Machine Learning (CS)
Makes AI better at math, code, and planning.
Reasoning in Diffusion Large Language Models is Concentrated in Dynamic Confusion Zones
Machine Learning (CS)
Teaches AI to learn better by focusing on tricky parts.