FastFace: Tuning Identity Preservation in Distilled Diffusion via Guidance and Attention
By: Sergey Karpukhin, Vadim Titov, Andrey Kuznetsov, and others
Potential Business Impact:
Makes AI art generators create faces faster.
In recent years, a plethora of identity-preserving adapters for personalized generation with diffusion models have been released. Their main disadvantage is that they are predominantly trained jointly with base diffusion models, which suffer from slow multi-step inference. This work tackles the training-free adaptation of pretrained ID-adapters to diffusion models accelerated via distillation: through a careful redesign of classifier-free guidance for few-step stylistic generation, and attention-manipulation mechanisms in decoupled blocks that improve identity similarity and fidelity, we propose the universal FastFace framework. Additionally, we develop a disentangled public evaluation protocol for identity-preserving adapters.
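The two ingredients the abstract names, classifier-free guidance and decoupled attention blocks, have well-known baseline forms that FastFace redesigns. As a minimal sketch of those baselines (not the paper's actual modifications), the standard CFG combination and an IP-Adapter-style decoupled cross-attention with a tunable identity scale look roughly like this; all function names and the `id_scale` parameter are illustrative assumptions:

```python
import numpy as np

def cfg_combine(noise_uncond, noise_cond, guidance_scale):
    # Standard classifier-free guidance: extrapolate from the
    # unconditional prediction toward the conditional one.
    # (Distilled few-step models are often trained without CFG,
    # which is why the paper has to redesign this step.)
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Plain scaled dot-product attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def decoupled_attention(q, k_txt, v_txt, k_id, v_id, id_scale=1.0):
    # IP-Adapter-style decoupled cross-attention: text tokens and
    # identity (face) tokens get separate attention passes whose
    # outputs are summed; id_scale trades off prompt-following
    # against identity similarity.
    return attention(q, k_txt, v_txt) + id_scale * attention(q, k_id, v_id)
```

Raising `id_scale` strengthens identity preservation at the cost of stylistic fidelity; manipulating this branch per-block is one generic way to tune that trade-off at inference time.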
Similar Papers
IP-FaceDiff: Identity-Preserving Facial Video Editing with Diffusion
CV and Pattern Recognition
Changes faces in videos with text commands.
Training-Free Identity Preservation in Stylized Image Generation Using Diffusion Models
CV and Pattern Recognition
Keeps faces the same when changing picture styles.
Training for Identity, Inference for Controllability: A Unified Approach to Tuning-Free Face Personalization
CV and Pattern Recognition
Makes AI create faces that look like real people.