Reverse Personalization
By: Han-Wei Kung, Tuomas Varanka, Nicu Sebe
Potential Business Impact:
Changes faces in pictures without needing text prompts.
Recent text-to-image diffusion models have demonstrated remarkable generation of realistic facial images conditioned on textual prompts and human identities, enabling the creation of personalized facial imagery. However, existing prompt-based methods for removing or modifying identity-specific features either rely on the subject being well represented in the pre-trained model or require model fine-tuning for specific identities. In this work, we analyze the identity generation process and introduce a reverse personalization framework for face anonymization. Our approach leverages conditional diffusion inversion, allowing direct manipulation of images without using text prompts. To generalize beyond subjects in the model's training data, we incorporate an identity-guided conditioning branch. Unlike prior anonymization methods, which lack control over facial attributes, our framework supports attribute-controllable anonymization. We demonstrate that our method achieves a state-of-the-art balance between identity removal, attribute preservation, and image quality. Source code and data are available at https://github.com/hanweikung/reverse-personalization.
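To make the abstract's pipeline concrete: it describes (1) deterministic diffusion inversion of the input face conditioned on an identity embedding, and (2) regeneration under a different identity embedding supplied by an identity-guided conditioning branch. Below is a minimal PyTorch sketch of that idea, assuming a generic noise-prediction network conditioned on an identity vector; the `MockDenoiser`, the embeddings, and all hyperparameters are illustrative stand-ins, not the authors' released implementation (see the linked repository for that).

```python
import torch

# Hypothetical stand-in for an identity-conditioned noise predictor
# (e.g., a U-Net conditioned on an ArcFace-style identity vector).
class MockDenoiser(torch.nn.Module):
    def __init__(self, channels=3, id_dim=512):
        super().__init__()
        self.conv = torch.nn.Conv2d(channels, channels, 3, padding=1)
        self.id_proj = torch.nn.Linear(id_dim, channels)

    def forward(self, x, t, id_emb):
        # Predicted noise; the identity embedding enters as a per-channel bias.
        return self.conv(x) + self.id_proj(id_emb)[:, :, None, None]

def make_alpha_bars(num_steps=50, beta_start=1e-4, beta_end=0.02):
    betas = torch.linspace(beta_start, beta_end, num_steps)
    return torch.cumprod(1.0 - betas, dim=0)

@torch.no_grad()
def ddim_invert(x0, denoiser, id_emb, alpha_bars):
    """Deterministic DDIM inversion (image -> noise), conditioned on the
    source identity embedding. No text prompt is involved."""
    x, prev_ab = x0, torch.tensor(1.0)  # alpha_bar is 1 for the clean image
    for t, ab in enumerate(alpha_bars):
        eps = denoiser(x, t, id_emb)
        # Clean-image estimate at the current noise level, then step forward.
        x0_pred = (x - (1 - prev_ab).sqrt() * eps) / prev_ab.sqrt()
        x = ab.sqrt() * x0_pred + (1 - ab).sqrt() * eps
        prev_ab = ab
    return x

@torch.no_grad()
def ddim_generate(xT, denoiser, id_emb, alpha_bars):
    """Deterministic DDIM sampling (noise -> image), conditioned on a
    swapped identity embedding to anonymize the face."""
    x = xT
    for t in reversed(range(len(alpha_bars))):
        ab = alpha_bars[t]
        prev_ab = alpha_bars[t - 1] if t > 0 else torch.tensor(1.0)
        eps = denoiser(x, t, id_emb)
        x0_pred = (x - (1 - ab).sqrt() * eps) / ab.sqrt()
        x = prev_ab.sqrt() * x0_pred + (1 - prev_ab).sqrt() * eps
    return x

# Illustrative "reverse personalization" flow.
denoiser = MockDenoiser()
alpha_bars = make_alpha_bars()
image = torch.randn(1, 3, 64, 64)   # stand-in for a face image
source_id = torch.randn(1, 512)     # embedding of the real identity
anon_id = torch.randn(1, 512)       # embedding of a surrogate identity

latent = ddim_invert(image, denoiser, source_id, alpha_bars)
anonymized = ddim_generate(latent, denoiser, anon_id, alpha_bars)
```

Because the inversion is deterministic, non-identity attributes captured in the latent are largely carried through, while swapping the identity embedding at generation time removes the source identity; attribute control would enter through additional conditioning on the same branch.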
Similar Papers
Controllable Localized Face Anonymization Via Diffusion Inpainting
CV and Pattern Recognition
Hides faces in pictures while keeping them useful.
Zero-shot Face Editing via ID-Attribute Decoupled Inversion
CV and Pattern Recognition
Changes faces in pictures using just words.
A Dual-stage Prompt-driven Privacy-preserving Paradigm for Person Re-Identification
CV and Pattern Recognition
Creates fake people pictures for safer computer training.