Learning User Preferences for Image Generation Model
By: Wenyi Mo, Ying Ba, Tianyu Zhang, and more
Potential Business Impact:
Predicts what you'll like to see next.
User preference prediction requires a comprehensive and accurate understanding of individual tastes. This includes both surface-level attributes, such as color and style, and deeper content-related aspects, such as themes and composition. However, existing methods typically rely on general human preferences or assume static user profiles, often neglecting individual variability and the dynamic, multifaceted nature of personal taste. To address these limitations, we propose an approach built upon Multimodal Large Language Models, introducing a contrastive preference loss and preference tokens to learn personalized user preferences from historical interactions. The contrastive preference loss is designed to effectively distinguish between user "likes" and "dislikes", while the learnable preference tokens capture shared interest representations among existing users, enabling the model to activate group-specific preferences and enhance consistency across similar users. Extensive experiments demonstrate that our model outperforms other methods in preference prediction accuracy, effectively identifying users with similar aesthetic inclinations and providing more precise guidance for generating images that align with individual tastes. The project page is https://learn-user-pref.github.io/.
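The abstract names two mechanisms but not their exact formulations, so the sketch below is only an illustration of the general idea: a pairwise logistic (Bradley-Terry-style) contrastive loss over liked/disliked pairs, and a small bank of learnable preference-token embeddings shared across users. The class name `PreferenceHead`, the token count, and the attention pooling are all hypothetical choices, not the paper's architecture.

```python
# Minimal sketch, assuming a pairwise logistic contrastive loss and a
# shared bank of learnable preference tokens. Not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PreferenceHead(nn.Module):
    def __init__(self, dim: int, num_pref_tokens: int = 16):
        super().__init__()
        # Learnable preference tokens: shared across users, intended to
        # capture group-level interest representations (assumption).
        self.pref_tokens = nn.Parameter(torch.randn(num_pref_tokens, dim) * 0.02)
        self.score = nn.Linear(dim, 1)

    def forward(self, user_hist: torch.Tensor, item: torch.Tensor) -> torch.Tensor:
        # user_hist: (B, T, D) embeddings of a user's interaction history
        # item:      (B, D)    embedding of a candidate image
        B = user_hist.size(0)
        # Pool the user's history together with the shared preference tokens,
        # weighted by similarity to the candidate image.
        tokens = self.pref_tokens.unsqueeze(0).expand(B, -1, -1)   # (B, K, D)
        context = torch.cat([user_hist, tokens], dim=1)            # (B, T+K, D)
        attn = torch.softmax(
            (context @ item.unsqueeze(-1)).squeeze(-1) / context.size(-1) ** 0.5,
            dim=-1,
        )                                                          # (B, T+K)
        pooled = (attn.unsqueeze(-1) * context).sum(dim=1)         # (B, D)
        return self.score(pooled + item).squeeze(-1)               # (B,)

def contrastive_preference_loss(score_liked, score_disliked):
    # Pairwise logistic loss: push "liked" scores above "disliked" scores.
    return -F.logsigmoid(score_liked - score_disliked).mean()

# Usage: score a liked and a disliked image for the same users.
head = PreferenceHead(dim=64)
hist = torch.randn(8, 10, 64)                  # 8 users, 10 past interactions
liked, disliked = torch.randn(8, 64), torch.randn(8, 64)
loss = contrastive_preference_loss(head(hist, liked), head(hist, disliked))
loss.backward()
```

Because the preference tokens are parameters shared by all users rather than per-user state, gradients from every user's like/dislike pairs update the same token bank, which is one plausible way the model could surface group-specific preferences as the abstract describes.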
Similar Papers
DesignPref: Capturing Personal Preferences in Visual Design Generation
CV and Pattern Recognition
Makes AI understand what *you* like in designs.
LiteraryTaste: A Preference Dataset for Creative Writing Personalization
Computation and Language
Teaches computers to write stories people like.
What Makes a Good Generated Image? Investigating Human and Multimodal LLM Image Preference Alignment
CV and Pattern Recognition
Helps AI understand what makes pictures look good.