Score: 1

Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm

Published: May 7, 2025 | arXiv ID: 2505.04600v2

By: Laura Wagner, Eva Cetinic

Potential Business Impact:

Helps platforms curb AI-generated fake, harmful images of real people.

Business Areas:
Personalization, Commerce and Shopping

Open-source text-to-image (TTI) pipelines have become dominant in the landscape of AI-generated visual content, driven by technological advances that enable users to personalize models through adapters tailored to specific tasks. While personalization methods such as LoRA offer unprecedented creative opportunities, they also facilitate harmful practices, including the generation of non-consensual deepfakes and the amplification of misogynistic or hypersexualized content. This study presents an exploratory sociotechnical analysis of CivitAI, the most active platform for sharing and developing open-source TTI models. Drawing on a dataset of more than 40 million user-generated images and over 230,000 models, we find a disproportionate rise in not-safe-for-work (NSFW) content and a significant number of models intended to mimic real individuals. We also observe a strong influence of internet subcultures on the tools and practices shaping model personalization and the resulting visual media. In response to these findings, we contextualize the emergence of exploitative visual media through feminist and constructivist perspectives on technology, emphasizing how design choices and community dynamics shape platform outcomes. Building on this analysis, we propose interventions aimed at mitigating downstream harm, including improving content moderation, rethinking tool design, and establishing clearer platform policies to promote accountability and consent.

Country of Origin
🇨🇭 Switzerland

Repos / Data Links

Page Count
31 pages

Category
Computer Science:
Computers and Society