BabyFlow: 3D modeling of realistic and expressive infant faces
By: Antonia Alomar, Mireia Masias, Marius George Linguraru, and more
Early detection of developmental disorders can be aided by analyzing infant craniofacial morphology, but modeling infant faces is challenging due to limited data and frequent spontaneous expressions. We introduce BabyFlow, a generative AI model that disentangles facial identity and expression, enabling independent control over both. Using normalizing flows, BabyFlow learns flexible, probabilistic representations that capture the complex, non-linear variability of expressive infant faces without restrictive linear assumptions. To address scarce and uncontrolled expressive data, we perform cross-age expression transfer, adapting expressions from adult 3D scans to enrich infant datasets with realistic and systematic expressive variants. As a result, BabyFlow improves 3D reconstruction accuracy, particularly in highly expressive regions such as the mouth, eyes, and nose, and supports synthesis and modification of infant expressions while preserving identity. Additionally, by integrating with diffusion models, BabyFlow generates high-fidelity 2D infant images with consistent 3D geometry, providing powerful tools for data augmentation and early facial analysis.
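The abstract does not give implementation details, but as a rough illustration of how a flow-based model can expose separate identity and expression codes, the sketch below builds a small RealNVP-style coupling flow in PyTorch over flattened 3D mesh vertices and splits its latent vector into an identity block and an expression block. The class names, layer sizes, mesh dimensionality, and split point are illustrative assumptions, not the BabyFlow architecture.

    # Minimal sketch (not the authors' code): an affine-coupling normalizing flow whose
    # latent space is split into "identity" and "expression" blocks that can be
    # manipulated independently. All dimensions and the split point are assumptions.
    import torch
    import torch.nn as nn

    class AffineCoupling(nn.Module):
        """One affine coupling layer: half the dims predict a scale/shift for the rest."""
        def __init__(self, dim, hidden=256, flip=False):
            super().__init__()
            self.flip = flip
            half = dim // 2
            self.net = nn.Sequential(
                nn.Linear(half, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * (dim - half)),
            )

        def forward(self, x):
            x1, x2 = x.chunk(2, dim=-1)
            if self.flip:
                x1, x2 = x2, x1
            s, t = self.net(x1).chunk(2, dim=-1)
            s = torch.tanh(s)                       # keep scales numerically well behaved
            y2 = x2 * torch.exp(s) + t
            y = torch.cat([y2, x1] if self.flip else [x1, y2], dim=-1)
            log_det = s.sum(dim=-1)
            return y, log_det

    class FaceFlow(nn.Module):
        """Maps flattened 3D face vertices to a latent split into (identity, expression)."""
        def __init__(self, dim, n_layers=6, id_dim=None):
            super().__init__()
            self.layers = nn.ModuleList(
                [AffineCoupling(dim, flip=(i % 2 == 1)) for i in range(n_layers)]
            )
            self.id_dim = id_dim or dim // 2        # assumed split between the two blocks

        def forward(self, x):
            log_det = torch.zeros(x.shape[0], device=x.device)
            for layer in self.layers:
                x, ld = layer(x)
                log_det = log_det + ld
            z_id, z_expr = x[:, :self.id_dim], x[:, self.id_dim:]
            return z_id, z_expr, log_det

    # Toy usage: 100 "scans" of a 500-vertex mesh, flattened to 1500-D vectors.
    flow = FaceFlow(dim=1500)
    scans = torch.randn(100, 1500)
    z_id, z_expr, log_det = flow(scans)
    # Training would maximize the flow log-likelihood: a standard-normal prior on
    # (z_id, z_expr) plus log_det, so no linear-subspace assumption is imposed.
    print(z_id.shape, z_expr.shape, log_det.shape)

In a model of this kind, swapping or resampling only the expression block and then inverting the flow back to vertex space would correspond to the sort of expression editing with preserved identity that the abstract describes; the sketch omits the inverse pass and any conditioning for brevity.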
Similar Papers
Instant Expressive Gaussian Head Avatar via 3D-Aware Expression Distillation
CV and Pattern Recognition
Creates expressive 3D talking-head avatars quickly and realistically.
Learning Disentangled Speech- and Expression-Driven Blendshapes for 3D Talking Face Animation
CV and Pattern Recognition
Animates 3D talking faces with separate controls for speech and expression.
A Hybrid Deep Learning Framework for Emotion Recognition in Children with Autism During NAO Robot-Mediated Interaction
CV and Pattern Recognition
Helps a robot recognize the emotions of children with autism during interaction.