Criminal Liability of Generative Artificial Intelligence Providers for User-Generated Child Sexual Abuse Material
By: Anamaria Mojica-Hanke, Thomas Goger, Svenja Wölfel, and more
Potential Business Impact:
Generative AI can be misused to create illegal child sexual abuse imagery, exposing model providers to potential criminal liability.
The development of more powerful Generative Artificial Intelligence (GenAI) has expanded its capabilities and the variety of its outputs. This has introduced significant legal challenges, including gray areas in various legal systems, such as the assessment of criminal liability for those responsible for these models. We therefore conducted a multidisciplinary study applying statutory interpretation of relevant German laws to a set of scenarios, offering a perspective on how different properties of GenAI bear on the generation of Child Sexual Abuse Material (CSAM). We found that generating CSAM with GenAI may carry criminal consequences not only for the user committing the primary offense but also for the individuals responsible for the models, such as independent software developers, researchers, and company representatives. Moreover, the assessment of criminal liability may be affected by contextual and technical factors, including the type of generated image, content moderation policies, and the model's intended purpose. Based on our findings, we discuss the implications for these different roles, as well as the requirements that apply when developing such systems.
Similar Papers
AI Generated Child Sexual Abuse Material -- What's the Harm?
Computers and Society
Examines the harms caused by AI-generated child sexual abuse material.
Unveiling AI's Threats to Child Protection: Regulatory efforts to Criminalize AI-Generated CSAM and Emerging Children's Rights Violations
Computers and Society
Surveys regulatory efforts to criminalize AI-generated CSAM and the children's rights violations it entails.
Exposing the Impact of GenAI for Cybercrime: An Investigation into the Dark Side
Computers and Society
Investigates how generative AI enables and amplifies online crime.