ROCM: RLHF on consistency models
By: Shivanshu Shekhar, Tong Zhang
Potential Business Impact:
Makes AI create better things faster.
Diffusion models have revolutionized generative modeling in continuous domains like image, audio, and video synthesis. However, their iterative sampling process leads to slow generation and inefficient training, challenges that are further exacerbated when incorporating Reinforcement Learning from Human Feedback (RLHF) due to sparse rewards and long time horizons. Consistency models address these issues by enabling single-step or efficient multi-step generation, significantly reducing computational costs. In this work, we propose a direct reward optimization framework for applying RLHF to consistency models, incorporating distributional regularization to enhance training stability and prevent reward hacking. We investigate various $f$-divergences as regularization strategies, striking a balance between reward maximization and model consistency. Unlike policy gradient methods, our approach leverages first-order gradients, making it more efficient and less sensitive to hyperparameter tuning. Empirical results show that our method achieves competitive or superior performance compared to policy-gradient-based RLHF methods across various automatic metrics and human evaluation. Additionally, our analysis demonstrates the impact of different regularization techniques on improving model generalization and preventing overfitting.
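The abstract describes direct reward optimization with first-order gradients: samples are generated in a single consistency-model step, a differentiable reward is maximized, and an $f$-divergence-style regularizer keeps the fine-tuned model close to its reference. The sketch below illustrates that training objective; it is not the authors' code, and the module names, shapes, and the L2 surrogate used in place of a true $f$-divergence are assumptions for illustration.

```python
# Minimal sketch of reward optimization for a consistency model with a
# distributional regularizer, assuming a differentiable reward model and a
# frozen reference consistency model. Names and shapes are illustrative.

import torch
import torch.nn.functional as F

def rlhf_consistency_loss(model, ref_model, reward_model, prompts, noise, beta=0.1):
    """Negative reward plus a regularizer pulling samples toward the reference.

    model, ref_model : consistency models mapping (noise, prompt) -> sample in one step
    reward_model     : differentiable reward r(x, prompt), higher is better
    beta             : strength of the distributional regularizer
    """
    # Single-step generation keeps the computation graph short, so the reward
    # gradient flows directly into the model parameters (first-order update,
    # no policy-gradient estimator needed).
    samples = model(noise, prompts)

    with torch.no_grad():
        ref_samples = ref_model(noise, prompts)

    # Reward term: push generations toward higher preference scores.
    reward = reward_model(samples, prompts).mean()

    # Regularizer: a simple sample-space stand-in for the f-divergence penalty
    # studied in the paper (here an L2 distance to the reference outputs).
    reg = F.mse_loss(samples, ref_samples)

    return -reward + beta * reg

# Usage (shapes are assumptions):
# noise = torch.randn(batch_size, 3, 64, 64)
# loss = rlhf_consistency_loss(model, ref_model, reward_model, prompts, noise)
# loss.backward(); optimizer.step()
```

Because the loss is an ordinary differentiable objective rather than a policy-gradient surrogate, standard optimizers apply directly, which is the efficiency and hyperparameter-robustness advantage the abstract highlights.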
Similar Papers
Distributionally Robust Reinforcement Learning with Human Feedback
Machine Learning (CS)
Makes AI smarter even with new, different questions.
Accelerating Diffusion Models in Offline RL via Reward-Aware Consistency Trajectory Distillation
Machine Learning (CS)
Makes AI learn faster and better for games.
A Unified Pairwise Framework for RLHF: Bridging Generative Reward Modeling and Policy Optimization
Machine Learning (CS)
Makes AI understand what people want better.