RMLer: Synthesizing Novel Objects across Diverse Categories via Reinforcement Mixing Learning
By: Jun Li, Zikun Chen, Haibo Chen, and more
Novel object synthesis by integrating distinct textual concepts from diverse categories remains a significant challenge in Text-to-Image (T2I) generation. Existing methods often suffer from insufficient concept mixing, a lack of rigorous evaluation, and suboptimal outputs, manifesting as conceptual imbalance, superficial combinations, or mere juxtaposition. To address these limitations, we propose Reinforcement Mixing Learning (RMLer), a framework that formulates cross-category concept fusion as a reinforcement learning problem: mixed features serve as states, mixing strategies as actions, and visual outcomes as rewards. Specifically, we design an MLP policy network to predict dynamic coefficients for blending cross-category text embeddings. We further introduce visual rewards based on (1) semantic similarity and (2) compositional balance between the fused object and its constituent concepts, and we optimize the policy via proximal policy optimization (PPO). At inference, a selection strategy leverages these rewards to curate the highest-quality fused objects. Extensive experiments demonstrate RMLer's superiority in synthesizing coherent, high-fidelity objects from diverse categories, outperforming existing methods. Our work provides a robust framework for generating novel visual concepts, with promising applications in film, gaming, and design.
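To make the formulation concrete, below is a minimal sketch (not the authors' released code) of the two pieces the abstract names: an MLP policy that maps a pair of concept text embeddings (the state) to per-dimension blending coefficients (the action), and a reward that combines semantic similarity with compositional balance. The class and function names (`PolicyMLP`, `mixing_reward`), the embedding width, the sigmoid parameterization, and the reward weighting are all illustrative assumptions; the paper's exact architecture and reward formula may differ.

```python
# Illustrative sketch of RMLer's RL formulation, under assumed details.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 768  # assumed text-embedding width (e.g., a CLIP text encoder)

class PolicyMLP(nn.Module):
    """Maps a pair of concept embeddings (the RL 'state') to blending
    coefficients in [0, 1] (the RL 'action'), then mixes the embeddings."""
    def __init__(self, dim: int = EMB_DIM, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, e_a: torch.Tensor, e_b: torch.Tensor) -> torch.Tensor:
        state = torch.cat([e_a, e_b], dim=-1)
        alpha = torch.sigmoid(self.net(state))    # dynamic coefficients
        return alpha * e_a + (1.0 - alpha) * e_b  # mixed embedding for the T2I model

def mixing_reward(img_emb: torch.Tensor, e_a: torch.Tensor,
                  e_b: torch.Tensor, balance_weight: float = 0.5) -> torch.Tensor:
    """Reward the generated image for aligning with BOTH source concepts
    (semantic similarity) while penalizing one-sided dominance
    (compositional balance). Weights here are placeholders."""
    sim_a = F.cosine_similarity(img_emb, e_a, dim=-1)
    sim_b = F.cosine_similarity(img_emb, e_b, dim=-1)
    semantic = 0.5 * (sim_a + sim_b)   # alignment with both concepts
    balance = -(sim_a - sim_b).abs()   # penalty when one concept dominates
    return semantic + balance_weight * balance

# Toy usage with random stand-ins for the text and image embeddings.
policy = PolicyMLP()
e_cat, e_chair = torch.randn(1, EMB_DIM), torch.randn(1, EMB_DIM)
mixed = policy(e_cat, e_chair)      # would condition the frozen T2I generator
img_emb = torch.randn(1, EMB_DIM)   # image embedding of the generated sample
print(mixing_reward(img_emb, e_cat, e_chair))
```

In the full method, this scalar reward drives PPO updates of the policy during training, and at inference the same reward is reused to rank and select the best fused objects among candidates.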
Similar Papers
RLMR: Reinforcement Learning with Mixed Rewards for Creative Writing
Artificial Intelligence
Uses reinforcement learning with mixed rewards to train models to write creatively while satisfying constraints.
A Survey of Generative Categories and Techniques in Multimodal Large Language Models
Multimedia
Surveys the categories and techniques multimodal large language models use to generate images, music, and video.