"Why the face?": Exploring Robot Error Detection Using Instrumented Bystander Reactions
By: Maria Teresa Parreira, Ruidong Zhang, Sukruth Gowdru Lingaraju, and more
Potential Business Impact:
Helps robots detect their own mistakes from people's facial reactions.
How do humans recognize and rectify social missteps? We achieve social competence by observing our peers, decoding subtle cues from bystanders (a raised eyebrow, a laugh) to evaluate the environment and our own actions. Robots, however, struggle to perceive and make use of these nuanced reactions. Using a novel neck-mounted device that records facial expressions from the chin region, we explore the potential of this previously untapped data to capture and interpret human responses to robot error. First, we develop NeckNet-18, a 3D facial reconstruction model that maps the reactions captured by the chin camera onto facial landmarks and head motion. We then use these facial responses to build a robot error detection model that outperforms standard approaches such as OpenFace features or raw video data, generalizing especially well on within-participant data. Through this work, we argue for expanding human-in-the-loop robot sensing, fostering more seamless integration of robots into diverse human environments, pushing the boundaries of social cue detection, and opening new avenues for adaptable robotics.
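The two-stage pipeline the abstract describes (chin-camera frames mapped to 3D facial landmarks and head motion, then a classifier over those reconstructions) can be pictured with a minimal sketch. Everything below is an assumption for illustration, not the paper's published interface: the "18" in NeckNet-18 is read here as a ResNet-18-style backbone, and `ChinToFace`, `ErrorDetector`, `NUM_LANDMARKS`, and the GRU classifier are hypothetical stand-ins.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# Stage 1: a ResNet-18-style encoder regresses 3D facial landmarks and
# head pose from chin-camera frames (standing in for NeckNet-18).
# Stage 2: a recurrent classifier flags robot errors from the sequence
# of reconstructed faces.
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_LANDMARKS = 68   # assumed landmark count (e.g., a 68-point layout)
POSE_DIMS = 3        # assumed head rotation: pitch, yaw, roll


class ChinToFace(nn.Module):
    """Maps one chin-camera frame to 3D landmarks and head motion."""

    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()  # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, NUM_LANDMARKS * 3 + POSE_DIMS)

    def forward(self, frames):  # frames: (B, 3, H, W)
        out = self.head(self.backbone(frames))
        landmarks = out[:, : NUM_LANDMARKS * 3].view(-1, NUM_LANDMARKS, 3)
        pose = out[:, NUM_LANDMARKS * 3:]
        return landmarks, pose


class ErrorDetector(nn.Module):
    """Classifies a window of reconstructed faces as error vs. no-error."""

    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(NUM_LANDMARKS * 3 + POSE_DIMS, hidden,
                          batch_first=True)
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, features):  # features: (B, T, landmarks + pose)
        _, last_hidden = self.rnn(features)
        return torch.sigmoid(self.classifier(last_hidden[-1]))
```

In use, the per-frame landmark and pose vectors would be concatenated over a reaction window and fed to the detector; the actual architectures, feature sets, and training details are those of the paper, not this sketch.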
Similar Papers
Training Models to Detect Successive Robot Errors from Human Reactions
Robotics
Teaches robots to see when humans get upset.
Awakening Facial Emotional Expressions in Human-Robot
Robotics
Robots learn to make human-like faces.
Adapting Robot's Explanation for Failures Based on Observed Human Behavior in Human-Robot Collaboration
Robotics
Robots learn to explain mistakes better to people.