
Generation of Real-time Robotic Emotional Expressions Learning from Human Demonstration in Mixed Reality

Published: August 12, 2025 | arXiv ID: 2508.08999v1

By: Chao Wang, Michael Gienger, Fan Zhang

Potential Business Impact:

Robots that express emotions the way humans do, using human-like motions learned from demonstration.

Expressive behaviors in robots are critical for effectively conveying their emotional states during interactions with humans. In this work, we present a framework that autonomously generates realistic and diverse robotic emotional expressions, learned from expert human demonstrations captured in Mixed Reality (MR). Our system enables experts to teleoperate a virtual robot from a first-person perspective, capturing their facial expressions, head movements, and upper-body gestures, and mapping these behaviors onto corresponding robotic components, including the eyes, ears, neck, and arms. Leveraging a flow-matching-based generative process, our model learns to produce coherent and varied behaviors in real time in response to moving objects, conditioned explicitly on given emotional states. A preliminary test validated the effectiveness of our approach for generating autonomous expressions.
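The flow-matching idea referenced in the abstract can be sketched in a few lines: a velocity network is trained to regress the constant velocity u = x1 − x0 along linear interpolation paths x_t = (1 − t)·x0 + t·x1 between noise x0 and demonstration data x1, and sampling then integrates that field from t = 0 to t = 1. Below is a minimal toy sketch (not the paper's implementation): 2-D points stand in for demonstration poses, and the ground-truth conditional velocity is integrated in place of a trained network v_theta(x, t, emotion), which is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for demonstration data: 2-D points (e.g. end-effector
# positions of an expressive gesture); x0 is the noise the flow starts from.
x1 = rng.normal(loc=3.0, scale=0.3, size=(256, 2))   # "data" samples
x0 = rng.normal(size=(256, 2))                        # noise samples

# --- Training targets used by flow matching ---------------------------
# For a random time t, the point on the linear interpolation path and the
# regression target (the constant conditional velocity) are:
t = rng.uniform(size=(256, 1))
x_t = (1 - t) * x0 + t * x1    # network input (together with t and condition)
u = x1 - x0                    # velocity the network is trained to predict

# --- Sampling: Euler-integrate dx/dt = v(x, t) from t=0 to t=1 --------
# Here we integrate the ground-truth conditional velocity; a trained model
# v_theta(x, t, emotion) would replace `u` below to generate new behaviors.
steps = 50
x = x0.copy()
for _ in range(steps):
    x = x + u / steps          # Euler step with step size 1/steps

# Constant velocity along the linear path transports noise exactly to data.
assert np.allclose(x, x1)
```

The Euler loop makes the mechanism concrete: because the conditional velocity along a straight path is constant, the integration lands exactly on the data; with a learned field, the same loop produces novel, condition-dependent trajectories.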

Page Count
4 pages

Category
Computer Science:
Robotics