MaskFocus: Focusing Policy Optimization on Critical Steps for Masked Image Generation
By: Guohui Zhang, Hu Yu, Xiaoxiao Ma, and more
Reinforcement learning (RL) has demonstrated significant potential for post-training language models and autoregressive visual generative models, but adapting RL to masked generative models remains challenging. The core difficulty is that, because masked generation is a multi-step, iterative refinement process, policy optimization must account for the likelihood of every sampling step. Relying on entire sampling trajectories incurs high computational cost, whereas naively optimizing randomly selected steps often yields suboptimal results. In this paper, we present MaskFocus, a novel RL framework that achieves effective policy optimization for masked generative models by focusing on critical steps. Specifically, we estimate the step-level information gain by measuring the similarity between the intermediate image at each sampling step and the final generated image, and we use this signal to identify the most critical and valuable steps and perform focused policy optimization on them. Furthermore, we design an entropy-based dynamic routing sampling mechanism that encourages the model to explore more valuable masking strategies for low-entropy samples. Extensive experiments on multiple text-to-image benchmarks validate the effectiveness of our method.
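To make the step-selection and routing ideas concrete, the sketch below scores each sampling step by how much it moves the intermediate image toward the final output, keeps the top-k highest-gain steps, and routes low-entropy samples to an exploratory masking strategy. This is a minimal sketch, not the paper's implementation: the similarity metric (cosine similarity over raw pixels), the information-gain definition (first difference of per-step similarities), the entropy threshold, and all function names are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two images, flattened to vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def select_critical_steps(intermediate_images, final_image, k=4):
    """Rank sampling steps by information gain and return the top-k indices.

    Information gain of step t is taken here as the first difference of the
    similarity between the step-t intermediate image and the final image
    (an assumed definition, for illustration only).
    """
    sims = [cosine_similarity(img, final_image) for img in intermediate_images]
    gains = np.diff(np.array([0.0] + sims))  # gain contributed by each step
    top_k = np.argsort(gains)[::-1][:k]      # steps with the largest gains
    return sorted(int(t) for t in top_k)

def route_by_entropy(token_probs: np.ndarray, threshold: float = 1.0) -> str:
    """Entropy-based routing: a sample whose mean token entropy falls below
    `threshold` is routed to an exploratory masking strategy (the threshold
    and the routing rule are assumptions)."""
    entropy = -np.sum(token_probs * np.log(token_probs + 1e-8), axis=-1)
    return "explore" if float(entropy.mean()) < threshold else "default"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Six fake intermediate images from one masked-generation trajectory.
    trajectory = [rng.random((8, 8)) for _ in range(6)]
    final = trajectory[-1]
    print(select_critical_steps(trajectory, final, k=2))
    # Fake per-token categorical distributions over a 16-token vocabulary.
    probs = rng.dirichlet(np.ones(16), size=(8, 8))
    print(route_by_entropy(probs))
```

Restricting policy-gradient updates to the selected steps avoids backpropagating through the full trajectory, which is the source of the computational cost noted above.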