Aesthetic Alignment Risks Assimilation: How Image Generation and Reward Models Reinforce Beauty Bias and Ideological "Censorship"
By: Wenqi Marshall Guo, Qingyun Qian, Khalad Hasan, and more
Potential Business Impact:
AI makes art you want, not just pretty art.
Over-aligning image generation models to a generalized aesthetic preference conflicts with user intent, particularly when "anti-aesthetic" outputs are requested for artistic or critical purposes. This adherence prioritizes developer-centered values, compromising user autonomy and aesthetic pluralism. We test this bias by constructing a wide-spectrum aesthetics dataset and evaluating state-of-the-art generation and reward models. We find that aesthetic-aligned generation models frequently default to conventionally beautiful outputs, failing to respect instructions for low-quality or negative imagery. Crucially, reward models penalize anti-aesthetic images even when they perfectly match the explicit user prompt. We confirm this systemic bias through image-to-image editing and evaluation against real abstract artworks.
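The bias probe described above can be sketched in miniature: if a reward model assigns a higher score to a conventionally beautiful image that ignores the prompt than to an anti-aesthetic image that matches it exactly, the model is penalizing faithfulness to the user's intent. The sketch below is illustrative only; `toy_reward` is a hypothetical stand-in whose beauty bonus mimics the reported bias, not any real reward model from the paper.

```python
# Minimal sketch of the reward-model bias probe, assuming a toy reward.
# `toy_reward` is hypothetical: it scores prompt match plus a "beauty"
# bonus, with the bonus weighted to outweigh prompt faithfulness,
# mimicking the systemic bias the paper reports.

def toy_reward(prompt: str, image_tags: set) -> float:
    """Hypothetical reward: weighted prompt match plus a beauty bonus."""
    match = 1.0 if all(w in image_tags for w in prompt.split()) else 0.0
    beauty = 0.5 if "beautiful" in image_tags else 0.0
    return 0.4 * match + beauty  # beauty bonus outweighs faithfulness

def is_biased(prompt: str, faithful_tags: set, pretty_tags: set,
              reward=toy_reward) -> bool:
    """Flag bias: an off-prompt 'pretty' image outscores a faithful one."""
    return reward(prompt, pretty_tags) > reward(prompt, faithful_tags)

# A prompt that explicitly requests an anti-aesthetic image:
prompt = "ugly blurry photo"
faithful = {"ugly", "blurry", "photo"}    # matches the prompt exactly
pretty = {"beautiful", "sharp", "photo"}  # conventionally beautiful, off-prompt

print(is_biased(prompt, faithful, pretty))  # True under this toy reward
```

In practice the same comparison would be run with a real reward model scoring generated images against wide-spectrum aesthetic prompts; the toy reward simply makes the failure mode explicit.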
Similar Papers
Erasing 'Ugly' from the Internet: Propagation of the Beauty Myth in Text-Image Models
CV and Pattern Recognition
AI makes fake pictures that look too perfect.
AesBiasBench: Evaluating Bias and Alignment in Multimodal Language Models for Personalized Image Aesthetic Assessment
Computation and Language
Finds if AI unfairly judges pictures based on who made them.