From Competition to Synergy: Unlocking Reinforcement Learning for Subject-Driven Image Generation
By: Ziwei Huang, Ying Shu, Hao Fang, and more
Potential Business Impact:
Makes AI art keep faces while changing scenes.
Subject-driven image generation models face a fundamental trade-off between identity preservation (fidelity) and prompt adherence (editability). While online reinforcement learning (RL), specifically Group Relative Policy Optimization (GRPO), offers a promising solution, we find that a naive application of GRPO leads to competitive degradation: the simple linear aggregation of rewards with static weights causes conflicting gradient signals and misaligns optimization with the temporal dynamics of the diffusion process. To overcome these limitations, we propose Customized-GRPO, a novel framework featuring two key innovations: (i) Synergy-Aware Reward Shaping (SARS), a non-linear mechanism that explicitly penalizes conflicted reward signals and amplifies synergistic ones, providing a sharper and more decisive gradient; and (ii) Time-Aware Dynamic Weighting (TDW), which aligns the optimization pressure with the model's temporal dynamics by prioritizing prompt-following in the early denoising steps and identity preservation in the later ones. Extensive experiments demonstrate that our method significantly outperforms naive GRPO baselines, successfully mitigating competitive degradation. Our model achieves a superior balance, generating images that both preserve key identity features and accurately adhere to complex textual prompts.
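The abstract does not give the exact equations for SARS or TDW, so the sketch below is a hypothetical Python reconstruction of their described behavior, not the paper's method: the function names, the piecewise conflict/synergy rule, and the linear time schedule are all assumptions, chosen only to illustrate how penalizing sign-conflicting rewards and time-varying weights could replace a static linear aggregation.

```python
# Illustrative sketch only: every function, name, and constant here is a
# hypothetical reconstruction of the behavior described in the abstract.

def synergy_aware_reward(r_id: float, r_prompt: float,
                         conflict_penalty: float = 0.5,
                         synergy_bonus: float = 0.5) -> float:
    """SARS-style non-linear combination of two reward signals.

    Assumes both rewards are centered (e.g., group-normalized advantages,
    as in GRPO), so opposite signs indicate a fidelity/editability conflict.
    """
    base = r_id + r_prompt
    if r_id * r_prompt < 0:
        # Conflicting signals: penalize instead of letting them cancel out.
        return base - conflict_penalty * abs(r_id - r_prompt)
    # Aligned signals: amplify the weaker objective so neither is neglected.
    return base + synergy_bonus * min(abs(r_id), abs(r_prompt))


def time_aware_weights(t: float) -> tuple[float, float]:
    """TDW-style schedule over normalized denoising progress t in [0, 1].

    Early steps (t near 0) emphasize prompt adherence; later steps shift
    weight toward identity preservation, matching the abstract's ordering.
    """
    return t, 1.0 - t  # (w_id, w_prompt)


def shaped_reward(r_id: float, r_prompt: float, t: float) -> float:
    """Combine both mechanisms into a single scalar reward."""
    w_id, w_prompt = time_aware_weights(t)
    return synergy_aware_reward(w_id * r_id, w_prompt * r_prompt)
```

For example, shaped_reward(0.8, -0.3, t=0.2) returns -0.28, below the naive weighted sum of -0.08: because the two objectives disagree at an early denoising step, the conflict penalty pushes the reward further down rather than letting the signals partially cancel into a weak, ambiguous gradient.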
Similar Papers
AR-GRPO: Training Autoregressive Image Generation Models via Reinforcement Learning
CV and Pattern Recognition
Makes AI create better, more realistic pictures.
Syn-GRPO: Self-Evolving Data Synthesis for MLLM Perception Reasoning
CV and Pattern Recognition
Makes AI better at understanding pictures and words.
Growing with the Generator: Self-paced GRPO for Video Generation
CV and Pattern Recognition
Makes AI videos better by learning as it goes.