Data Synthesis with Diverse Styles for Face Recognition via 3DMM-Guided Diffusion
By: Yuxi Mi, Zhizhou Zhong, Yuge Huang, and more
Potential Business Impact:
Creates realistic fake faces that can train face recognition systems instead of real photos.
Identity-preserving face synthesis aims to generate synthetic face images of virtual subjects that can substitute for real-world data when training face recognition models. Prior methods strive to create images with consistent identities and diverse styles, but they face a trade-off between the two. Noting that these methods treat style variation as subject-agnostic, whereas real-world persons actually exhibit distinct, subject-specific styles, this paper introduces MorphFace, a diffusion-based face generator. The generator learns fine-grained facial styles, e.g., shape, pose, and expression, from the renderings of a 3D morphable model (3DMM), and learns identities from an off-the-shelf recognition model. To create virtual faces, the generator is conditioned on novel identities of unlabeled synthetic faces and on novel styles statistically sampled from a real-world prior distribution. The sampling accounts for both intra-subject variation and subject distinctiveness. A context-blending strategy further enhances the generator's responsiveness to the identity and style conditions. Extensive experiments show that MorphFace outperforms prior state-of-the-art methods in face recognition efficacy.
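The key sampling idea, styles that are distinctive per subject yet vary within a subject, can be illustrated with a minimal hierarchical-sampling sketch. This is a hypothetical illustration, not the paper's implementation: the dimension, prior, and spread parameters below are assumed for demonstration, and real 3DMM style coefficients would replace the Gaussian placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

STYLE_DIM = 6       # stand-in for 3DMM shape/pose/expression coefficients (assumed)
POP_STD = 1.0       # between-subject spread: subject distinctiveness (assumed)
INTRA_STD = 0.3     # within-subject spread: intra-subject variation (assumed)

def sample_subject_styles(n_images: int) -> np.ndarray:
    """Two-level sampling: draw a subject-specific style center from a
    population prior, then draw per-image styles around that center."""
    # Each virtual subject gets its own distinct style center.
    subject_center = rng.normal(0.0, POP_STD, size=STYLE_DIM)
    # Images of that subject vary around the center with smaller spread.
    return rng.normal(subject_center, INTRA_STD, size=(n_images, STYLE_DIM))

# Four style vectors for four images of one virtual subject.
styles = sample_subject_styles(4)
```

Because the within-subject spread is smaller than the between-subject spread, images of one virtual subject share a recognizable style signature while still differing from one another, which is the property the paper's prior sampling is designed to capture.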
Similar Papers
Training-Free Identity Preservation in Stylized Image Generation Using Diffusion Models
CV and Pattern Recognition
Keeps faces the same when changing picture styles.
Bringing Diversity from Diffusion Models to Semantic-Guided Face Asset Generation
CV and Pattern Recognition
Creates and changes digital faces with more control.
StyleMM: Stylized 3D Morphable Face Model via Text-Driven Aligned Image Translation
Graphics
Makes 3D faces look like any art style.