Generation of Real-time Robotic Emotional Expressions Learning from Human Demonstration in Mixed Reality
By: Chao Wang, Michael Gienger, Fan Zhang
Potential Business Impact:
Robots show feelings like humans do.
Expressive behaviors in robots are critical for effectively conveying their emotional states during interactions with humans. In this work, we present a framework that autonomously generates realistic and diverse robotic emotional expressions based on expert human demonstrations captured in Mixed Reality (MR). Our system enables experts to teleoperate a virtual robot from a first-person perspective, capturing their facial expressions, head movements, and upper-body gestures, and mapping these behaviors onto corresponding robotic components, including the eyes, ears, neck, and arms. Leveraging a flow-matching-based generative process, our model learns to produce coherent and varied behaviors in real time in response to moving objects, conditioned explicitly on given emotional states. A preliminary test validated the effectiveness of our approach for generating autonomous expressions.
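To make the conditional flow-matching idea concrete, below is a minimal sketch of how such a generator could be trained and sampled in PyTorch. The module names, pose/emotion/context dimensions, and training objective shown here are illustrative assumptions for a rectified-flow-style formulation, not the authors' actual implementation.

```python
# Minimal sketch of conditional flow matching for emotional motion generation.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalVelocityField(nn.Module):
    """Predicts the flow-matching velocity for a robot pose vector,
    conditioned on an emotion embedding and an object-position feature."""
    def __init__(self, pose_dim=24, emotion_dim=8, context_dim=3, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + emotion_dim + context_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, x_t, t, emotion, context):
        # t has shape (batch, 1); concatenate state, time, and conditioning.
        return self.net(torch.cat([x_t, t, emotion, context], dim=-1))

def flow_matching_loss(model, x1, emotion, context):
    """Regress the velocity along straight paths from noise x0 to
    demonstrated poses x1 (rectified-flow-style flow matching)."""
    x0 = torch.randn_like(x1)
    t = torch.rand(x1.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1          # linear interpolation path
    target_v = x1 - x0                   # constant velocity along that path
    pred_v = model(x_t, t, emotion, context)
    return ((pred_v - target_v) ** 2).mean()

@torch.no_grad()
def sample(model, emotion, context, pose_dim=24, steps=20):
    """Euler integration of the learned velocity field from noise to a pose."""
    x = torch.randn(emotion.shape[0], pose_dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0], 1), i * dt)
        x = x + dt * model(x, t, emotion, context)
    return x
```

In a setup like this, demonstrated robot poses (eyes, ears, neck, arms) would serve as x1, while the emotion label and the tracked object position act as the conditioning signals; a small number of Euler steps keeps sampling fast enough for real-time response.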
Similar Papers
Awakening Facial Emotional Expressions in Human-Robot
Robotics
Robots learn to make human-like faces.
Inferring Operator Emotions from a Motion-Controlled Robotic Arm
Robotics
Robot movements show how the operator feels.
Human Feedback Driven Dynamic Speech Emotion Recognition
Sound
Makes cartoon characters show real feelings.