A Note on Hybrid Online Reinforcement and Imitation Learning for LLMs: Formulations and Algorithms
By: Yingru Li, Ziniu Li, Jiacai Liu
Potential Business Impact:
Teaches computers to learn from examples and rewards.
We present a unified framework for Large Language Model (LLM) fine-tuning that integrates Imitation Learning and Reinforcement Learning. By analyzing the gradient of a composite objective combining trajectory-level KL divergence with task rewards, we derive a natural decomposition into two components: (1) an analytically computable Dense Gradient for token-level imitation, and (2) a Monte Carlo estimated Sparse Gradient for long-horizon reward optimization. The Dense Gradient admits a closed-form logit-level formula, enabling efficient GPU implementation.
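To make the decomposition concrete, below is a minimal PyTorch sketch (not taken from the paper) of a composite loss whose gradient splits the same way: the trajectory-level KL decomposes into per-token KL terms computed exactly over the full vocabulary (the Dense Gradient, analytic at the logit level), while the task-reward term is a REINFORCE-style surrogate estimated from sampled trajectories (the Sparse Gradient). The function name hybrid_gradient_loss, the forward-KL direction, and the mean-reward baseline are illustrative assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def hybrid_gradient_loss(policy_logits, ref_logits, sampled_ids, rewards, beta=0.1):
    """Sketch of a composite objective: token-level KL imitation (dense)
    plus a Monte Carlo REINFORCE surrogate for the task reward (sparse).

    policy_logits, ref_logits: (batch, seq_len, vocab)
    sampled_ids: (batch, seq_len) tokens sampled from the current policy
    rewards: (batch,) scalar task reward per trajectory
    """
    log_p = F.log_softmax(policy_logits, dim=-1)
    with torch.no_grad():
        log_q = F.log_softmax(ref_logits, dim=-1)
        q = log_q.exp()

    # Dense term: exact token-level KL(ref || policy), summed over the full
    # vocabulary at every position. No sampling is needed, and its gradient
    # w.r.t. the logits has the closed form softmax(policy) - ref.
    dense_kl = (q * (log_q - log_p)).sum(-1).mean()

    # Sparse term: REINFORCE surrogate for the sequence-level reward.
    # Only the sampled tokens contribute, and the reward arrives once per
    # trajectory, so this gradient is Monte Carlo estimated.
    token_logp = log_p.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1)
    seq_logp = token_logp.sum(-1)
    baseline = rewards.mean()  # simple variance-reduction baseline (assumption)
    sparse_pg = -((rewards - baseline).detach() * seq_logp).mean()

    return beta * dense_kl + sparse_pg
```

In this sketch, backpropagating through dense_kl yields the analytically computable token-level imitation gradient, while sparse_pg contributes the sampled, reward-weighted policy-gradient term; beta is a hypothetical weight trading off the two components.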
Similar Papers
Stabilizing Reinforcement Learning with LLMs: Formulation and Practices
Machine Learning (CS)
Makes AI learn better and faster from mistakes.