Data-regularized Reinforcement Learning for Diffusion Models at Scale
By: Haotian Ye, Kaiwen Zheng, Jiashu Xu, and more
Potential Business Impact:
Makes AI create better videos that people like.
Aligning generative diffusion models with human preferences via reinforcement learning (RL) is critical yet challenging. Most existing algorithms are vulnerable to reward hacking, such as quality degradation, over-stylization, or reduced diversity. Our analysis demonstrates that this can be attributed to the inherent limitations of their regularization, which provides unreliable penalties. We introduce Data-regularized Diffusion Reinforcement Learning (DDRL), a novel framework that uses the forward KL divergence to anchor the policy to an off-policy data distribution. Theoretically, DDRL enables robust, unbiased integration of RL with standard diffusion training. Empirically, this translates into a simple yet effective algorithm that combines reward maximization with diffusion loss minimization. With over a million GPU hours of experiments and ten thousand double-blind human evaluations, we demonstrate on high-resolution video generation tasks that DDRL significantly improves rewards while alleviating the reward hacking seen in baselines, achieving the highest human preference and establishing a robust and scalable paradigm for diffusion post-training.
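The abstract describes the DDRL objective as reward maximization combined with a standard diffusion (denoising) loss on off-policy data, where the diffusion loss acts as a forward-KL anchor to the data distribution. The sketch below illustrates that combination in PyTorch under stated assumptions; all names (`policy`, `reward_model`, `noise_sched`, `beta`) are hypothetical placeholders, not the authors' implementation, and it ignores how the reward gradient is propagated through sampling (e.g., via a policy-gradient estimator).

```python
# Minimal sketch of a DDRL-style objective: reward maximization plus a
# diffusion-loss anchor to off-policy data (a forward-KL regularizer).
# All object names and interfaces below are illustrative assumptions.

import torch

def ddrl_loss(policy, reward_model, data_batch, noise_sched, beta=0.1):
    # --- Reward term: sample from the current policy and score the samples ---
    samples = policy.sample(batch_size=data_batch["x0"].shape[0])
    reward = reward_model(samples).mean()

    # --- Data-regularization term: standard denoising loss on off-policy data ---
    x0 = data_batch["x0"]                                  # clean off-policy data
    t = torch.randint(0, noise_sched.num_steps, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = noise_sched.add_noise(x0, noise, t)              # forward diffusion q(x_t | x_0)
    eps_pred = policy.denoise(x_t, t)                      # noise prediction
    diffusion_loss = torch.mean((eps_pred - noise) ** 2)

    # Minimizing this loss maximizes reward while anchoring the policy to the
    # data distribution through the diffusion loss.
    return -reward + beta * diffusion_loss
```

In this reading, the diffusion loss is (up to constants) a Monte Carlo estimate of the forward KL from the data distribution to the model, which is what makes the regularizer an unbiased anchor rather than a penalty computed on the model's own samples.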
Similar Papers
Beyond Human Demonstrations: Diffusion-Based Reinforcement Learning to Generate Data for VLA Training
Robotics
Teaches robots to do many tasks better.
Goal-Driven Reward by Video Diffusion Models for Reinforcement Learning
Machine Learning (CS)
Teaches robots goals using movie clips.
Inference-Time Alignment Control for Diffusion Models with Reinforcement Learning Guidance
Machine Learning (CS)
Makes AI art better match what you want.