Zero-shot Face Editing via ID-Attribute Decoupled Inversion
By: Yang Hou, Minggu Wang, Jianjun Zhao
Potential Business Impact:
Changes faces in pictures using just words.
Recent advancements in text-guided diffusion models have shown promise for general image editing via inversion techniques, but they often struggle to maintain identity (ID) and structural consistency in real face editing tasks. To address this limitation, we propose a zero-shot face editing method based on ID-Attribute Decoupled Inversion. Specifically, we decompose the face representation into ID and attribute features and use them as joint conditions to guide both the inversion and the reverse diffusion processes. This enables independent control over ID and attributes, ensuring strong ID preservation and structural consistency while allowing precise facial attribute manipulation. Our method supports a wide range of complex multi-attribute face editing tasks using only text prompts, without requiring region-specific input, and operates at a speed comparable to DDIM inversion. Comprehensive experiments demonstrate its practicality and effectiveness.
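To make the decoupled-conditioning idea concrete, here is a minimal, hypothetical sketch, not the authors' implementation: a toy deterministic DDIM inversion and reverse pass in which the noise predictor is conditioned on a joint [ID | attribute] feature, with the ID part held fixed and the attribute part swapped for the edit. All module names (ToyDenoiser, id_encoder, attr_encoder), dimensions, and the schedule are placeholder assumptions.

```python
# Hypothetical sketch of ID-attribute decoupled conditioning for DDIM-style
# inversion and reverse diffusion. Everything here is a toy stand-in.
import torch
import torch.nn as nn

T = 50                                          # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative noise schedule

class ToyDenoiser(nn.Module):
    """Predicts noise from x_t, the timestep, and the joint [ID | attribute] condition."""
    def __init__(self, x_dim=16, cond_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + cond_dim + 1, 64),
                                 nn.SiLU(), nn.Linear(64, x_dim))
    def forward(self, x, t, cond):
        t_emb = torch.full((x.shape[0], 1), float(t) / T)
        return self.net(torch.cat([x, cond, t_emb], dim=-1))

id_encoder = nn.Linear(16, 4)     # stand-in for a face-ID encoder
attr_encoder = nn.Linear(16, 4)   # stand-in for an attribute encoder
eps_model = ToyDenoiser()

def ddim_step(x, t_from, t_to, cond):
    """One deterministic DDIM move from timestep t_from to t_to (either direction)."""
    a_from = alpha_bar[t_from]
    a_to = alpha_bar[t_to] if t_to >= 0 else torch.tensor(1.0)
    eps = eps_model(x, t_from, cond)
    x0 = (x - (1 - a_from).sqrt() * eps) / a_from.sqrt()   # predicted clean sample
    return a_to.sqrt() * x0 + (1 - a_to).sqrt() * eps

@torch.no_grad()
def edit(face, edited_attr_source):
    # Decoupled conditions: ID taken from the input face, attributes from the edit target.
    id_feat = id_encoder(face)
    src_cond = torch.cat([id_feat, attr_encoder(face)], dim=-1)
    edit_cond = torch.cat([id_feat, attr_encoder(edited_attr_source)], dim=-1)

    # Inversion: follow the deterministic trajectory forward under the source condition.
    x = face
    for t in range(T - 1):
        x = ddim_step(x, t, t + 1, src_cond)

    # Reverse diffusion: denoise with the same ID but the edited attribute condition.
    for t in range(T - 1, -1, -1):
        x = ddim_step(x, t, t - 1, edit_cond)
    return x

print(edit(torch.randn(1, 16), torch.randn(1, 16)).shape)  # torch.Size([1, 16])
```

The key point the sketch tries to capture is that only the attribute half of the condition changes between inversion and reverse sampling, while the ID half stays fixed, which is what lets attributes be edited without drifting away from the original identity.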
Similar Papers
Beyond Inference Intervention: Identity-Decoupled Diffusion for Face Anonymization
CV and Pattern Recognition
Makes faces look different but still real.
Efficient Few-shot Identity Preserving Attribute Editing for 3D-aware Deep Generative Models
CV and Pattern Recognition
Changes 3D faces with few pictures.
DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability
CV and Pattern Recognition
Creates many personalized faces from one photo.