Seeing is Believing: Emotion-Aware Audio-Visual Language Modeling for Expressive Speech Generation
By: Weiting Tan, Jiachen Lian, Hirofumi Inaguma, and more
Potential Business Impact:
Makes computer speech sound emotionally expressive by reading facial cues.
We present an Audio-Visual Language Model (AVLM) for expressive speech generation by integrating full-face visual cues into a pre-trained expressive speech model. We explore multiple visual encoders and multimodal fusion strategies during pre-training to identify the most effective integration approach. Subsequent fine-tuning on emotion recognition and expressive dialogue tasks yields substantial gains over speech-only baselines (e.g., +5 F1 in emotion recognition). AVLM highlights the value of expressive visual information in guiding speech generation and offers a foundation for end-to-end multimodal conversational systems.
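The abstract mentions exploring multimodal fusion strategies for integrating visual features into the speech model, without specifying which one performed best. Below is a minimal sketch of one plausible strategy, cross-attention from speech hidden states to full-face visual features. The module name, dimensions, and residual design are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of cross-attention fusion (assumed design, not the
# paper's confirmed architecture). Speech hidden states attend to projected
# full-face visual features, then merge via a residual connection.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Fuse visual features into speech hidden states via cross-attention."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, speech: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # speech: (batch, T_speech, d_model) hidden states from the speech LM
        # visual: (batch, T_video, d_model) visual-encoder features projected
        #         to the speech model's hidden size
        fused, _ = self.attn(query=speech, key=visual, value=visual)
        return self.norm(speech + fused)  # residual keeps the speech pathway intact


if __name__ == "__main__":
    fusion = CrossAttentionFusion()
    speech = torch.randn(2, 100, 512)  # e.g., 100 speech tokens per utterance
    visual = torch.randn(2, 25, 512)   # e.g., 25 video frames of face features
    print(fusion(speech, visual).shape)  # torch.Size([2, 100, 512])
```

The residual connection here reflects a common design choice when injecting a new modality into a pre-trained model: the fused output degrades gracefully toward the original speech representation if the visual signal is uninformative.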
Similar Papers
AV-EMO-Reasoning: Benchmarking Emotional Reasoning Capabilities in Omni-modal LLMs with Audio-Visual Cues
Multimedia
AI understands feelings better from voices and faces.
Contrastive Language-Image Learning with Augmented Textual Prompts for 3D/4D FER Using Vision-Language Model
CV and Pattern Recognition
Reads emotions from faces in 3D.