A Note on Hybrid Online Reinforcement and Imitation Learning for LLMs: Formulations and Algorithms

Published: December 28, 2025 | arXiv ID: 2512.23097v1

By: Yingru Li, Ziniu Li, Jiacai Liu

Potential Business Impact:

Improves LLM fine-tuning by combining learning from demonstrations (imitation) with learning from rewards (reinforcement).

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

We present a unified framework for Large Language Model (LLM) fine-tuning that integrates Imitation Learning and Reinforcement Learning. By analyzing the gradient of a composite objective combining trajectory-level KL divergence with task rewards, we derive a natural decomposition into two components: (1) an analytically computable Dense Gradient for token-level imitation, and (2) a Monte Carlo-estimated Sparse Gradient for long-horizon reward optimization. The Dense Gradient admits a closed-form logit-level formula, enabling efficient GPU implementation.
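To make the abstract's decomposition concrete, here is a minimal PyTorch sketch of a composite loss of the general shape described: a dense per-token KL term computed exactly over the full vocabulary (analytic, no sampling) plus a sparse REINFORCE-style term through which the trajectory-level reward reaches the policy. This is an illustrative assumption, not the paper's reference implementation; the function name, tensor shapes, and the `beta` weighting are hypothetical.

```python
# Hedged sketch of a hybrid IL + RL objective: dense (exact) KL gradient
# plus sparse (Monte Carlo) reward gradient. Not the paper's code.
import torch
import torch.nn.functional as F

def composite_loss(policy_logits, ref_logits, action_ids, reward, beta=0.1):
    """policy_logits, ref_logits: [T, V] token logits; action_ids: [T] sampled
    tokens; reward: scalar trajectory-level reward. ref_logits come from a
    frozen reference/expert model, so they carry no gradient."""
    logp = F.log_softmax(policy_logits, dim=-1)            # [T, V]
    ref_logp = F.log_softmax(ref_logits, dim=-1).detach()  # [T, V], frozen

    # Dense term: KL(pi_theta(.|s_t) || pi_ref(.|s_t)) summed exactly over the
    # vocabulary at every token. Its gradient is analytic and low-variance,
    # which is what allows an efficient logit-level GPU implementation.
    dense_kl = (logp.exp() * (logp - ref_logp)).sum(-1).sum()

    # Sparse term: score-function (REINFORCE) surrogate. The scalar reward
    # only touches the policy through the log-probs of the sampled tokens,
    # so this part must be estimated by Monte Carlo rollouts.
    chosen_logp = logp.gather(-1, action_ids.unsqueeze(-1)).squeeze(-1)  # [T]
    sparse_pg = -(reward * chosen_logp.sum())

    return sparse_pg + beta * dense_kl
```

Minimizing this loss descends the negated composite objective: the dense KL piece pulls the policy toward the reference distribution token by token, while the sparse piece optimizes the long-horizon task reward.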

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)