MM-R1: Unleashing the Power of Unified Multimodal Large Language Models for Personalized Image Generation
By: Qian Liang, Yujia Wu, Kuncheng Li, and more
Potential Business Impact:
Creates personalized pictures from your descriptions.
Multimodal Large Language Models (MLLMs) with unified architectures excel across a wide range of vision-language tasks, yet aligning them with personalized image generation remains a significant challenge. Existing personalization methods for MLLMs are often subject-specific, demanding a data-intensive fine-tuning process for every new subject, which limits their scalability. In this paper, we introduce MM-R1, a framework that integrates a cross-modal Chain-of-Thought (X-CoT) reasoning strategy to unlock the inherent potential of unified MLLMs for personalized image generation. Specifically, we structure personalization as an integrated visual reasoning and generation process: (1) grounding subject concepts by interpreting and understanding user-provided images and contextual cues, and (2) generating personalized images conditioned on both the extracted subject representations and user prompts. To further enhance the reasoning capability, we adopt Group Relative Policy Optimization (GRPO) to explicitly align the generation process. Experiments demonstrate that MM-R1 unleashes the personalization capability of unified MLLMs, generating images with high subject fidelity and strong text alignment in a zero-shot manner.
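The GRPO step can be made concrete with a small sketch. The snippet below illustrates only the group-relative advantage computation that GRPO-style training relies on: several candidate generations are sampled per prompt, each is scored by a reward (here placeholder values standing in for a score combining subject fidelity and text alignment), and rewards are normalized against their group's mean and standard deviation. The function name, tensor shapes, and reward values are illustrative assumptions, not the authors' implementation.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize per-sample rewards within each group of rollouts.

    rewards: shape (num_prompts, group_size); one reward per sampled
    generation, where each group shares the same prompt / subject image.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled generations each, with hypothetical
# reward scores (e.g., subject fidelity + text alignment).
rewards = torch.tensor([[0.2, 0.8, 0.5, 0.9],
                        [0.1, 0.4, 0.3, 0.6]])
print(group_relative_advantages(rewards))
```

Generations scoring above their group's average receive positive advantages and are reinforced; below-average ones are penalized, which is the mechanism GRPO uses in place of a learned value baseline.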
Similar Papers
A Survey of Generative Categories and Techniques in Multimodal Large Language Models
Multimedia
Computers can now create pictures, music, and videos.
Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models
CV and Pattern Recognition
Teaches computers to solve math problems better.