TempFlow-GRPO: When Timing Matters for GRPO in Flow Models
By: Xiaoxuan He, Siming Fu, Yuke Zhao, and more
Potential Business Impact:
Makes AI pictures better match what you want.
Recent flow matching models for text-to-image generation have achieved remarkable quality, yet their integration with reinforcement learning for human preference alignment remains suboptimal, hindering fine-grained reward-based optimization. We observe that the key impediment to effective GRPO training of flow models is the temporal uniformity assumption in existing approaches: sparse terminal rewards with uniform credit assignment fail to capture the varying criticality of decisions across generation timesteps, resulting in inefficient exploration and suboptimal convergence. To remedy this shortcoming, we introduce TempFlow-GRPO (Temporal Flow GRPO), a principled GRPO framework that captures and exploits the temporal structure inherent in flow-based generation. TempFlow-GRPO introduces three key innovations: (i) a trajectory branching mechanism that provides process rewards by concentrating stochasticity at designated branching points, enabling precise credit assignment without requiring specialized intermediate reward models; (ii) a noise-aware weighting scheme that modulates policy optimization according to the intrinsic exploration potential of each timestep, prioritizing learning during high-impact early stages while ensuring stable refinement in later phases; and (iii) a seed group strategy that controls for initialization effects to isolate exploration contributions. These innovations endow the model with temporally-aware optimization that respects the underlying generative dynamics, leading to state-of-the-art performance in human preference alignment and text-to-image benchmarks.
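To make the temporally-aware optimization idea concrete, below is a minimal sketch of how group-relative advantages and a noise-aware timestep weighting might be combined in a GRPO-style surrogate loss. It is an illustration only, based on the abstract: the function names (grpo_advantages, noise_aware_weight, weighted_policy_loss) and the specific weighting schedule (proportional to the step's noise level) are assumptions, not the authors' actual implementation, and the trajectory branching and seed group mechanisms are only reflected in the comments.

```python
import numpy as np

def grpo_advantages(rewards):
    # Group-relative advantages: normalize terminal rewards within a group of
    # samples generated for the same prompt (and, per the seed-group idea,
    # from the same initial noise), so reward differences reflect the
    # stochastic exploration choices rather than the initialization.
    r = np.asarray(rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + 1e-8)

def noise_aware_weight(sigma_t, sigmas):
    # Hypothetical weighting: give more credit to high-noise (early) steps,
    # where stochastic decisions have a larger effect on the final image,
    # and less to low-noise refinement steps. The exact schedule used by
    # TempFlow-GRPO is not specified in the abstract.
    return sigma_t / (np.max(sigmas) + 1e-8)

def weighted_policy_loss(logprobs, rewards, sigmas):
    """Toy GRPO-style surrogate with per-timestep weighting.

    logprobs: array [group, timesteps] of policy log-probs of the sampled
              transitions along each trajectory.
    rewards:  array [group] of terminal (or branch-level process) rewards.
    sigmas:   array [timesteps] of noise levels of the sampling schedule.
    """
    adv = grpo_advantages(rewards)                                   # [G]
    w = np.array([noise_aware_weight(s, sigmas) for s in sigmas])    # [T]
    # REINFORCE-style term: each timestep's log-prob is scaled by the
    # group-relative advantage and its noise-aware weight.
    per_step = -(adv[:, None] * w[None, :] * logprobs)
    return per_step.mean()
```

In this toy setup, a branching scheme would supply the rewards argument per branching point (all trajectories share deterministic steps except at the branch, so the reward difference isolates that decision), which is what lets the weighted log-prob terms act as process-level credit assignment.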
Similar Papers
Anchoring Values in Temporal and Group Dimensions for Flow Matching Model Alignment
Machine Learning (CS)
Makes AI draw better pictures by fixing mistakes.
Growing with the Generator: Self-paced GRPO for Video Generation
CV and Pattern Recognition
Makes AI videos better by learning as it goes.