Multimodal Integration Challenges in Emotionally Expressive Child Avatars for Training Applications
By: Pegah Salehi, Sajad Amouei Sheshkal, Vajira Thambawita, and more
Potential Business Impact:
Makes computer faces show feelings from voices.
Dynamic facial emotion is essential for believable AI-generated avatars, yet most systems remain visually static, limiting their use in simulations such as virtual training for investigative interviews with abused children. We present a real-time architecture that combines Unreal Engine 5 MetaHuman rendering with NVIDIA Omniverse Audio2Face to generate facial expressions from vocal prosody in photorealistic child avatars. Because current TTS options offer few suitable child voices, both avatars were voiced with young adult female models from two systems, chosen to best fit the character profiles; this introduces a voice-age mismatch, a confound that may affect audiovisual alignment. A two-PC setup decouples speech generation from GPU-intensive rendering, enabling low-latency interaction on desktop and in VR. In a between-subjects study (N=70) comparing audio+visual and visual-only conditions, participants rated emotional clarity, facial realism, and empathy for avatars expressing joy, sadness, and anger. Emotions were generally recognized, especially sadness and joy, but anger was harder to detect without audio, highlighting the role of voice in high-arousal expressions. Interestingly, silencing the clips improved perceived realism by removing mismatches between voice and animation, especially when tone or age felt incongruent. These results underscore the importance of audiovisual congruence: a mismatched voice undermines expression, while a good match can enhance weaker visuals, posing challenges for emotionally coherent avatars in sensitive contexts.
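The two-PC split described above is, at its core, a producer/consumer hand-off: one machine synthesizes speech while the other spends its GPU budget on rendering, so TTS latency never stalls the render loop. The sketch below illustrates that decoupling only under assumed details: a plain TCP link, WAV-file input, and the names RENDER_HOST, stream_speech, and receive_speech are hypothetical placeholders rather than the authors' code. In the actual pipeline, the audio would be forwarded to NVIDIA Omniverse Audio2Face's streaming input, which infers the facial animation driving the MetaHuman.

```python
import socket
import struct
import wave

RENDER_HOST = "192.168.1.20"   # hypothetical address of the rendering PC
PORT = 50007                   # hypothetical port for the audio hand-off
CHUNK_FRAMES = 1024            # audio frames per packet (~23 ms at 44.1 kHz)


def stream_speech(wav_path: str) -> None:
    """Run on the speech PC: push synthesized TTS audio to the render PC."""
    with wave.open(wav_path, "rb") as wav, \
         socket.create_connection((RENDER_HOST, PORT)) as sock:
        # Fixed-size header so the receiver knows the audio format:
        # sample rate (uint32), channels (uint16), sample width in bytes (uint16).
        sock.sendall(struct.pack("!IHH", wav.getframerate(),
                                 wav.getnchannels(), wav.getsampwidth()))
        while chunk := wav.readframes(CHUNK_FRAMES):
            sock.sendall(chunk)


def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes, since recv() may return partial data."""
    buf = b""
    while len(buf) < n:
        part = conn.recv(n - len(buf))
        if not part:
            raise ConnectionError("stream closed early")
        buf += part
    return buf


def receive_speech() -> None:
    """Run on the render PC: accept one stream and collect PCM chunks."""
    with socket.create_server(("", PORT)) as server:
        conn, _ = server.accept()
        with conn:
            rate, channels, width = struct.unpack("!IHH", _recv_exact(conn, 8))
            print(f"incoming audio: {rate} Hz, {channels} ch, {8 * width}-bit")
            while chunk := conn.recv(4096):
                # In the real pipeline, each chunk would be handed to
                # Audio2Face's streaming audio input to animate the face.
                pass
```

The value of the split is latency isolation: the render machine treats speech as just another input stream, so expression inference and rendering stay interactive even if speech generation momentarily lags.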
Similar Papers
EAI-Avatar: Emotion-Aware Interactive Talking Head Generation
Audio and Speech Processing
Makes talking robots show real feelings.
Audio Driven Real-Time Facial Animation for Social Telepresence
Graphics
Makes virtual faces talk and move like real people.
Agent-Based Modular Learning for Multimodal Emotion Recognition in Human-Agent Systems
Machine Learning (CS)
Helps computers understand feelings from faces, voices, words.