ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion

Published: October 6, 2025 | arXiv ID: 2510.04706v1

By: Foivos Paraperas Papantoniou, Stefanos Zafeiriou

Potential Business Impact:

Edits a subject's facial expression with fine-grained control while preserving their identity, including subtle micro-expressions.

Business Areas:
Facial Recognition Data and Analytics, Software

Human-centric generative models designed for AI-driven storytelling must bring together two core capabilities: identity consistency and precise control over human performance. While recent diffusion-based approaches have made significant progress in maintaining facial identity, achieving fine-grained expression control without compromising identity remains challenging. In this work, we present a diffusion-based framework that faithfully reimagines any subject under any particular facial expression. Building on an ID-consistent face foundation model, we adopt a compositional design featuring an expression cross-attention module guided by FLAME blendshape parameters for explicit control. Trained on a diverse mixture of image and video data rich in expressive variation, our adapter generalizes beyond basic emotions to subtle micro-expressions and expressive transitions, overlooked by prior works. In addition, a pluggable Reference Adapter enables expression editing in real images by transferring the appearance from a reference frame during synthesis. Extensive quantitative and qualitative evaluations show that our model outperforms existing methods in tailored and identity-consistent expression generation. Code and models can be found at https://github.com/foivospar/Arc2Face.
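The abstract describes an expression cross-attention module in which FLAME blendshape parameters condition a frozen, ID-consistent diffusion backbone. The paper's exact architecture is not given here, so the following is only a minimal PyTorch sketch of the general idea: a blendshape vector is projected into a few context tokens, and U-Net image features cross-attend to them through a residual adapter. All dimensions, token counts, and the `BlendshapeCrossAttention` name are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class BlendshapeCrossAttention(nn.Module):
    """Hypothetical adapter: image features attend to tokens derived
    from FLAME expression (blendshape) coefficients."""

    def __init__(self, feat_dim=320, blendshape_dim=50, n_tokens=4, n_heads=8):
        super().__init__()
        self.n_tokens = n_tokens
        self.feat_dim = feat_dim
        # Project the blendshape vector into a small set of context tokens.
        self.to_tokens = nn.Linear(blendshape_dim, n_tokens * feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, x, blendshapes):
        # x: (B, N, feat_dim) spatial features from the diffusion U-Net
        # blendshapes: (B, blendshape_dim) FLAME expression coefficients
        b = x.shape[0]
        ctx = self.to_tokens(blendshapes).view(b, self.n_tokens, self.feat_dim)
        out, _ = self.attn(query=x, key=ctx, value=ctx)
        # Residual connection, so the adapter can be bolted onto a
        # frozen ID-consistent backbone without disturbing it at init.
        return x + out


# Toy usage with random tensors (shapes only, no pretrained weights).
features = torch.randn(2, 16, 320)
expr = torch.randn(2, 50)
adapter = BlendshapeCrossAttention()
out = adapter(features, expr)
print(tuple(out.shape))
```

The residual form mirrors how pluggable adapters are typically attached to pretrained diffusion models: at initialization the module perturbs the backbone only mildly, and training then learns to steer features toward the target expression.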

Country of Origin
🇬🇧 United Kingdom

Repos / Data Links
https://github.com/foivospar/Arc2Face

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition