DiffUMI: Training-Free Universal Model Inversion via Unconditional Diffusion for Face Recognition
By: Hanrui Wang, Shuo Wang, Chun-Shien Lu, and others
Potential Business Impact:
Reconstructs private face images from the embeddings that face recognition systems store.
Face recognition technology presents serious privacy risks due to its reliance on sensitive and immutable biometric data. To address these concerns, such systems typically convert raw facial images into embeddings, which are traditionally viewed as privacy-preserving. However, model inversion attacks challenge this assumption by reconstructing private facial images from embeddings, highlighting a critical vulnerability in face recognition systems. Most existing inversion methods require training a separate generator for each target model, making them computationally intensive. In this work, we introduce DiffUMI, a diffusion-based universal model inversion attack that requires no additional training. DiffUMI is the first approach to successfully leverage unconditional face generation without relying on model-specific generators. It surpasses state-of-the-art attacks by 15.5% and 9.82% in success rate on standard and privacy-preserving face recognition systems, respectively. Furthermore, we propose a novel use of out-of-domain detection (OODD), demonstrating for the first time that model inversion can differentiate between facial and non-facial embeddings using only the embedding space.
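The core idea behind embedding-guided model inversion can be illustrated with a toy sketch: given only a target embedding, the attacker optimizes a generator's latent code so the generated image's embedding matches it, without training a new generator. The sketch below is an assumption-laden stand-in, not DiffUMI itself: `W_g` plays the role of an unconditional generator and `W_f` of a face-recognition encoder, both replaced here by random linear maps so the example stays self-contained.

```python
import numpy as np

# Hypothetical toy sketch of embedding-guided model inversion.
# W_g (a stand-in "unconditional generator") and W_f (a stand-in
# "face recognition encoder") are random linear maps, NOT the paper's
# diffusion model or any real face-recognition network.
rng = np.random.default_rng(0)
W_g = rng.normal(size=(64, 16))   # latent (16-d) -> "image" (64-d)
W_f = rng.normal(size=(8, 64))    # "image" -> embedding (8-d)
A = W_f @ W_g                     # end-to-end map, used for gradients

def embed(z):
    """L2-normalized embedding of the image generated from latent z."""
    u = A @ z
    return u / np.linalg.norm(u)

# The attacker only observes this leaked target embedding.
e_target = embed(rng.normal(size=16))

# Training-free inversion: gradient descent on cosine distance in
# embedding space, updating only the generator's latent code.
z = rng.normal(size=16)
lr = 0.1
for _ in range(2000):
    u = A @ z
    n = np.linalg.norm(u)
    # gradient of -cos(embed(z), e_target) with respect to z
    grad = -A.T @ (e_target / n - (e_target @ u) * u / n**3)
    z -= lr * grad

cosine = float(embed(z) @ e_target)
print(f"cosine similarity after inversion: {cosine:.4f}")
```

Because only the latent code is optimized, the same loop works against any target encoder whose embeddings (or gradients through them) are available, which is what makes such an attack "universal" in spirit; DiffUMI replaces the toy generator here with an unconditional face diffusion model.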
Similar Papers
Diffusion-based Adversarial Identity Manipulation for Facial Privacy Protection
CV and Pattern Recognition
Makes faces look different to stop tracking.
Enhancing Facial Privacy Protection via Weakening Diffusion Purification
CV and Pattern Recognition
Makes your face unreadable to spy cameras.
Training for Identity, Inference for Controllability: A Unified Approach to Tuning-Free Face Personalization
CV and Pattern Recognition
Makes AI create faces that look like real people.