AI-Generated Child Sexual Abuse Material: What's the Harm?
By: Caoilte Ó Ciardha, John Buckley, Rebecca S. Portnoff
Potential Business Impact:
Generative AI can produce synthetic child sexual abuse material, causing real harm to children.
The development of generative artificial intelligence (AI) tools capable of producing wholly or partially synthetic child sexual abuse material (AI CSAM) presents profound challenges for child protection, law enforcement, and societal responses to child exploitation. While some argue that the harmfulness of AI CSAM differs fundamentally from that of other CSAM due to a perceived absence of direct victimization, this perspective fails to account for the range of risks associated with its production and consumption. AI has been implicated in the creation of synthetic CSAM of children who have not previously been abused, the revictimization of known survivors of abuse, the facilitation of grooming, coercion, and sexual extortion, and the normalization of child sexual exploitation. Additionally, AI CSAM may serve as a new or enhanced pathway into offending by lowering barriers to engagement, desensitizing users to progressively extreme content, and undermining protective factors for individuals with a sexual interest in children. This paper provides a primer on some key technologies, critically examines the harms associated with AI CSAM, and cautions against claims that it may function as a harm reduction tool, emphasizing how some appeals to harmlessness obscure its real risks and may contribute to inertia in ecosystem responses.
Similar Papers
Unveiling AI's Threats to Child Protection: Regulatory efforts to Criminalize AI-Generated CSAM and Emerging Children's Rights Violations
Computers and Society
Reviews regulatory efforts to criminalize AI-generated CSAM and the emerging children's rights violations it enables.
Evaluating Concept Filtering Defenses against Child Sexual Abuse Material Generation by Text-to-Image Models
Cryptography and Security
Finds that concept-filtering defenses are insufficient to prevent text-to-image models from generating CSAM.
Culling Misinformation from Gen AI: Toward Ethical Curation and Refinement
Computers and Society
Proposes ethical curation and refinement practices to reduce misinformation from generative AI.