DiverseGRPO: Mitigating Mode Collapse in Image Generation via Diversity-Aware GRPO
By: Henglin Liu, Huijuan Huang, Jing Wang, and more
Potential Business Impact:
Makes AI art more creative and varied.
Reinforcement learning (RL), particularly GRPO, significantly improves image generation quality by comparing the relative performance of images generated within the same group. However, in the later stages of training, the model tends to produce homogenized outputs that lack creativity and visual diversity, which restricts its application scenarios. This issue can be analyzed from two perspectives: reward modeling and generation dynamics. First, traditional GRPO relies on single-sample quality as the reward signal, driving the model to converge toward a few high-reward generation modes while neglecting distribution-level diversity. Second, conventional GRPO regularization neglects the dominant role of early-stage denoising in preserving diversity, causing a misaligned regularization budget that limits the achievable quality–diversity trade-off. Motivated by these insights, we address both issues. At the reward level, we propose a distributional creativity bonus based on semantic grouping. Specifically, we construct a distribution-level representation via spectral clustering over samples generated from the same caption, and adaptively allocate exploratory rewards according to group sizes to encourage the discovery of novel visual modes. At the generation level, we introduce a structure-aware regularization that enforces stronger early-stage constraints to preserve diversity without compromising reward optimization efficiency. Experiments demonstrate that our method achieves a 13%–18% improvement in semantic diversity under matched quality scores, establishing a new Pareto frontier between image quality and diversity for GRPO-based image generation.
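The two mechanisms in the abstract can be sketched in code. This is a minimal illustration under stated assumptions, not the paper's implementation: connected components on a thresholded cosine-similarity graph stand in for the paper's spectral clustering, and the 1/size bonus rule, the power-law timestep schedule, and all function names are hypothetical choices for exposition.

```python
import numpy as np

def creativity_bonus(embeddings, sim_threshold=0.8):
    """Distribution-level exploratory bonus for a group of samples
    generated from the same caption: semantically similar samples are
    grouped, and members of smaller groups (rarer visual modes) earn
    larger bonuses. Connected components on a thresholded cosine-
    similarity graph are used here as a lightweight stand-in for
    spectral clustering (an assumption, not the paper's method)."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize
    adj = (X @ X.T) >= sim_threshold                  # similarity graph

    n = len(X)
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in range(n):                                # flood-fill components
        if labels[i] >= 0:
            continue
        stack = [i]
        labels[i] = cluster
        while stack:
            j = stack.pop()
            for k in np.flatnonzero(adj[j]):
                if labels[k] < 0:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1

    sizes = np.bincount(labels)        # samples per semantic group
    bonus = 1.0 / sizes[labels]        # rare modes earn larger bonuses
    return bonus / bonus.sum()         # normalize bonuses to sum to 1

def early_stage_kl_weight(t, T, alpha=2.0):
    """Timestep-dependent regularization weight: heavier constraint at
    early (high-noise) denoising steps, where t near T denotes early
    denoising. The power-law schedule is an illustrative assumption."""
    return (t / T) ** alpha
```

For example, with two near-duplicate embeddings and one outlier, the outlier forms a singleton group and receives the largest share of the bonus, nudging the policy toward underexplored modes; the KL weight then concentrates the regularization budget on the early denoising steps that the abstract identifies as dominant for diversity.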
Similar Papers
Growing with the Generator: Self-paced GRPO for Video Generation
CV and Pattern Recognition
Makes AI videos better by learning as it goes.
Diverse Video Generation with Determinantal Point Process-Guided Policy Optimization
CV and Pattern Recognition
Makes AI create many different videos from one idea.
Syn-GRPO: Self-Evolving Data Synthesis for MLLM Perception Reasoning
CV and Pattern Recognition
Makes AI better at understanding pictures and words.