StyleMM: Stylized 3D Morphable Face Model via Text-Driven Aligned Image Translation
By: Seungmi Lee, Kwan Yun, Junyong Noh
Potential Business Impact:
Renders 3D face models in any text-described art style.
We introduce StyleMM, a novel framework that can construct a stylized 3D Morphable Model (3DMM) based on user-defined text descriptions specifying a target style. Building upon a pre-trained mesh deformation network and a texture generator for original 3DMM-based realistic human faces, our approach fine-tunes these models using stylized facial images generated via text-guided image-to-image (i2i) translation with a diffusion model, which serve as stylization targets for the rendered mesh. To prevent undesired changes in identity, facial alignment, or expressions during i2i translation, we introduce a stylization method that explicitly preserves the facial attributes of the source image. By maintaining these critical attributes during image stylization, the proposed approach ensures consistent 3D style transfer across the 3DMM parameter space through image-based training. Once trained, StyleMM enables feed-forward generation of stylized face meshes with explicit control over shape, expression, and texture parameters, producing meshes with consistent vertex connectivity and animatability. Quantitative and qualitative evaluations demonstrate that our approach outperforms state-of-the-art methods in terms of identity-level facial diversity and stylization capability. The code and videos are available at [kwanyun.github.io/stylemm_page](kwanyun.github.io/stylemm_page).
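To make the pipeline concrete, below is a minimal PyTorch sketch of the image-based fine-tuning loop the abstract describes: sample 3DMM parameters, render the face with the mesh deformation network and texture generator, obtain a stylization target via attribute-preserving, text-guided i2i translation, and update both networks with an image-space loss. All class names, tensor sizes, the `render` and `stylize_i2i` stubs, and the loss choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_VERTS, PARAM_DIM, IMG = 5023, 150, 64   # placeholder sizes (FLAME-like vertex count)

class DeformNet(nn.Module):
    """Stand-in for the pre-trained mesh deformation network: 3DMM params -> vertex offsets."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(PARAM_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, N_VERTS * 3))

    def forward(self, p):
        return self.mlp(p).view(-1, N_VERTS, 3)

class TexGen(nn.Module):
    """Stand-in for the texture generator: 3DMM params -> RGB UV texture."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(PARAM_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, 3 * IMG * IMG))

    def forward(self, p):
        return torch.sigmoid(self.mlp(p)).view(-1, 3, IMG, IMG)

def render(verts, tex):
    """Placeholder for a differentiable renderer (e.g. PyTorch3D or nvdiffrast)."""
    return tex  # a real renderer would rasterize the textured, deformed mesh

def stylize_i2i(image, prompt):
    """Placeholder for attribute-preserving, text-guided i2i diffusion translation."""
    # Toy stand-in: a real implementation would run a diffusion pipeline on the
    # render while preserving identity, alignment, and expression.
    return image.detach().clamp(0.2, 0.8)

deform_net, tex_gen = DeformNet(), TexGen()
opt = torch.optim.Adam(list(deform_net.parameters()) + list(tex_gen.parameters()), lr=1e-4)

for step in range(1000):
    params = torch.randn(4, PARAM_DIM)                    # sample 3DMM shape/expression/texture codes
    verts = deform_net(params)                            # vertex offsets (added to a base mesh in the full method)
    rendered = render(verts, tex_gen(params))             # differentiable rendering of the face
    target = stylize_i2i(rendered, "a Pixar-style face")  # stylization target from the diffusion model
    loss = F.l1_loss(rendered, target)                    # image-space loss; a perceptual term could be added
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design choice the abstract emphasizes is that the stylization target keeps the source image's identity, alignment, and expression fixed, which is what lets the image-space fine-tuning transfer the style consistently across the whole 3DMM parameter space.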
Similar Papers
StyleMorpheus: A Style-Based 3D-Aware Morphable Face Model
CV and Pattern Recognition
Creates realistic 3D faces from any photo.
Data Synthesis with Diverse Styles for Face Recognition via 3DMM-Guided Diffusion
CV and Pattern Recognition
Synthesizes diverse-style face images to improve face recognition.
Text-based Animatable 3D Avatars with Morphable Model Alignment
CV and Pattern Recognition
Creates realistic talking 3D heads from text.