Deepfakes on Demand: the rise of accessible non-consensual deepfake image generators
By: Will Hawkins, Chris Russell, Brent Mittelstadt
Potential Business Impact:
Shows how easily fake images of real, identifiable people can be created and shared.
Advances in multimodal machine learning have made text-to-image (T2I) models increasingly accessible and popular. However, T2I models introduce risks such as the generation of non-consensual depictions of identifiable individuals, known as deepfakes. This paper presents an empirical study of the accessibility of deepfake model variants online. Through a metadata analysis of thousands of publicly downloadable model variants on two popular repositories, Hugging Face and Civitai, we demonstrate a marked rise in easily accessible deepfake models. We identify almost 35,000 publicly downloadable deepfake model variants, primarily hosted on Civitai. These models have been downloaded almost 15 million times since November 2022 and target a range of individuals, from global celebrities to Instagram users with under 10,000 followers. Both Stable Diffusion and Flux models are used to create deepfake variants; 96% of these target women, and many signal intent to generate non-consensual intimate imagery (NCII). Deepfake model variants are often created via the parameter-efficient fine-tuning technique known as low-rank adaptation (LoRA), which requires as few as 20 images, 24 GB of VRAM, and 15 minutes of time, making the process widely accessible on consumer-grade computers. These models violate the Terms of Service of hosting platforms and contravene regulation seeking to prevent their dissemination; our results emphasise the pressing need for greater action against the creation of deepfakes and NCII.
Similar Papers
Perpetuating Misogyny with Generative AI: How Model Personalization Normalizes Gendered Harm
Computers and Society
Examines how model personalisation in generative AI normalises gendered harm, and how to prevent it.
What Exactly is a Deepfake?
Computers and Society
Clarifies what counts as a deepfake, noting that synthetic videos can also serve beneficial uses such as learning and therapy.
Comparative Analysis of Deepfake Detection Models: New Approaches and Perspectives
CV and Pattern Recognition
Compares deepfake detection models for identifying fabricated video content.