DiffUMI: Training-Free Universal Model Inversion via Unconditional Diffusion for Face Recognition

Published: April 25, 2025 | arXiv ID: 2504.18015v2

By: Hanrui Wang, Shuo Wang, Chun-Shien Lu, and more

Potential Business Impact:

Reconstructs private facial images from the embeddings that face recognition systems store in place of raw photos.

Business Areas:
Image Recognition, Data and Analytics, Software

Face recognition technology presents serious privacy risks due to its reliance on sensitive and immutable biometric data. To address these concerns, such systems typically convert raw facial images into embeddings, which are traditionally viewed as privacy-preserving. However, model inversion attacks challenge this assumption by reconstructing private facial images from embeddings, highlighting a critical vulnerability in face recognition systems. Most existing inversion methods require training a separate generator for each target model, making them computationally intensive. In this work, we introduce DiffUMI, a diffusion-based universal model inversion attack that requires no additional training. DiffUMI is the first approach to successfully leverage unconditional face generation without relying on model-specific generators. It surpasses state-of-the-art attacks by 15.5% and 9.82% in success rate on standard and privacy-preserving face recognition systems, respectively. Furthermore, we propose a novel use of out-of-domain detection (OODD), demonstrating for the first time that model inversion can differentiate between facial and non-facial embeddings using only the embedding space.
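The core idea of model inversion can be illustrated with a toy sketch: given only a target embedding and access to the encoder, an attacker optimizes an input until its embedding matches. The names and the linear "encoder" below are illustrative stand-ins, not the paper's method; DiffUMI instead steers the latent of an unconditional diffusion generator so reconstructions stay face-like.

```python
import numpy as np

# Toy model-inversion sketch (illustrative only; not DiffUMI itself).
rng = np.random.default_rng(0)
D_IN, D_EMB = 64, 16  # toy "image" and embedding dimensions

# Stand-in for a face-recognition encoder: a fixed linear map.
W = rng.standard_normal((D_EMB, D_IN)) / np.sqrt(D_IN)

def embed(x):
    """Map an input to its embedding (a real system would use a deep CNN)."""
    return W @ x

x_private = rng.standard_normal(D_IN)   # the "private face"
target = embed(x_private)               # the embedding the system stores

# The attacker minimizes ||embed(x) - target||^2 by gradient descent,
# using only the target embedding and access to the encoder.
x = np.zeros(D_IN)
for _ in range(500):
    grad = 2 * W.T @ (embed(x) - target)
    x -= 0.1 * grad

residual = np.linalg.norm(embed(x) - target)
print(residual)  # near zero: the embedding is matched
```

Note that because the embedding space is lower-dimensional than the input, many inputs share the same embedding; the recovered `x` matches the embedding without equaling `x_private`. This is exactly why attacks like DiffUMI rely on a generative prior: it constrains the reconstruction to the manifold of plausible faces.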

Country of Origin
🇨🇳 China

Page Count
19 pages

Category
Computer Science: Cryptography and Security