On the Importance of Conditioning for Privacy-Preserving Data Augmentation
By: Julian Lorenz, Katja Ludwig, Valentin Haug, and more
Potential Business Impact:
Shows that AI-redrawn pictures can still give away who is in them.
Latent diffusion models can be used as a powerful augmentation method to artificially extend datasets for enhanced training. To the human eye, these augmented images look very different from the originals. Previous work has suggested using this data augmentation technique for data anonymization. However, we show that latent diffusion models that are conditioned on features like depth maps or edges to guide the diffusion process are not suitable as a privacy-preserving method. We use a contrastive learning approach to train a model that can correctly identify people from a pool of candidates. Moreover, we demonstrate that anonymization using conditioned diffusion models is susceptible to black-box attacks. We attribute the success of the described methods to the conditioning of the latent diffusion model in the anonymization process. The diffusion model is instructed to produce similar edges for the anonymized images. Hence, a recognition model can learn these recurring patterns and use them for identification.
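To illustrate the kind of conditioning the abstract refers to, here is a minimal sketch of edge-conditioned augmentation using the diffusers library with a public ControlNet checkpoint. The checkpoint names, Canny thresholds, and prompt are assumptions for illustration, not the paper's exact pipeline; the key point is that the edge map of the original image steers the generated image.

```python
# Sketch: edge-conditioned latent diffusion (assumed ControlNet setup,
# not necessarily the paper's exact configuration).
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract Canny edges from the original image. These edges condition the
# diffusion process, so they reappear in the "anonymized" output.
original = np.array(Image.open("person.jpg").convert("RGB"))
edges = cv2.Canny(original, 100, 200)
edge_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The result looks different to a human but shares the original's edge
# structure, which is exactly the signal a re-identification model can exploit.
augmented = pipe("a photo of a person", image=edge_image).images[0]
augmented.save("augmented.png")
```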
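The re-identification attack rests on contrastive learning over (original, anonymized) pairs. Below is a minimal sketch of that idea with an InfoNCE-style loss; the encoder choice (ResNet-18), embedding size, and temperature are assumptions, not the paper's reported setup.

```python
# Sketch: contrastive re-identification of anonymized images (assumed
# encoder and hyperparameters, illustrating the general technique).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

encoder = resnet18(num_classes=128)  # shared encoder with a 128-dim embedding head

def info_nce(orig_emb, anon_emb, temperature=0.07):
    """Match each anonymized image to its original among all candidates."""
    orig_emb = F.normalize(orig_emb, dim=1)
    anon_emb = F.normalize(anon_emb, dim=1)
    logits = anon_emb @ orig_emb.t() / temperature  # similarity to every candidate
    targets = torch.arange(logits.size(0))          # i-th anonymized matches i-th original
    return F.cross_entropy(logits, targets)

# Training step with stand-in tensors for originals and their
# diffusion-anonymized counterparts.
originals = torch.randn(8, 3, 224, 224)
anonymized = torch.randn(8, 3, 224, 224)
loss = info_nce(encoder(originals), encoder(anonymized))
loss.backward()
```

At test time, identification reduces to a nearest-neighbor search: embed the anonymized image and pick the closest original in the candidate pool. If conditioning leaks edge structure, that neighbor is often the true identity.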
Similar Papers
Enhanced Privacy Leakage from Noise-Perturbed Gradients via Gradient-Guided Conditional Diffusion Models
Cryptography and Security
Steals private pictures from shared computer learning.
Enhancing Facial Privacy Protection via Weakening Diffusion Purification
CV and Pattern Recognition
Makes your face unreadable to spy cameras.
Controllable Localized Face Anonymization Via Diffusion Inpainting
CV and Pattern Recognition
Hides faces in pictures while keeping them useful.