Adapting Robot's Explanation for Failures Based on Observed Human Behavior in Human-Robot Collaboration
By: Andreas Naoum, Parag Khanna, Elmira Yadollahi, and others
Potential Business Impact:
Robots learn to explain mistakes better to people.
This work aims to interpret human behavior in order to anticipate potential user confusion when a robot explains its failures, allowing the robot to adapt its explanations for more natural and efficient collaboration. Using a dataset of facial emotion detection, eye gaze estimation, and gestures from 55 participants in a user study, we analyzed how human behavior changed in response to different failure types and varying explanation levels. Our goal is to assess whether human collaborators can accept less detailed explanations without becoming confused. We formulate a data-driven model to predict human confusion during robot failure explanations, and we propose and evaluate a mechanism, built on this predictor, that adapts the explanation level according to observed human behavior. The promising results from this evaluation indicate the potential of adapting a robot's failure explanations to enhance the collaborative experience.
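The abstract describes a two-part pipeline: a predictor that maps observed behavioral cues to a confusion estimate, and an adaptation rule that raises or lowers the explanation level based on that estimate. The sketch below is a hypothetical minimal illustration of that structure, not the paper's actual model; all feature names, weights, and thresholds are invented placeholders.

```python
# Hypothetical sketch of the predictor + adaptation mechanism described
# in the abstract. Feature names, weights, and thresholds are illustrative
# assumptions, not values from the paper.
from dataclasses import dataclass


@dataclass
class BehaviorFeatures:
    negative_emotion: float  # facial-emotion score in [0, 1]
    gaze_on_robot: float     # fraction of time gazing at the robot, [0, 1]
    gesture_rate: float      # normalized gesture frequency, [0, 1]


def predict_confusion(f: BehaviorFeatures) -> float:
    """Toy weighted score standing in for the learned confusion predictor."""
    score = 0.5 * f.negative_emotion + 0.3 * f.gaze_on_robot + 0.2 * f.gesture_rate
    return max(0.0, min(1.0, score))


def adapt_explanation_level(level: int, f: BehaviorFeatures,
                            low: float = 0.3, high: float = 0.7) -> int:
    """Raise detail when predicted confusion is high, lower it when low.

    Levels 0-2 stand in for the paper's 'explanation levels'
    (e.g. brief -> detailed); the exact granularity is an assumption.
    """
    c = predict_confusion(f)
    if c > high:
        return min(2, level + 1)  # more detailed explanation
    if c < low:
        return max(0, level - 1)  # less detailed explanation
    return level                  # keep the current level
```

For example, a collaborator showing strong negative affect and frequent gestures would push the predicted confusion above the upper threshold, prompting a more detailed explanation; a calm, disengaged collaborator would allow the robot to shorten its explanation.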
Similar Papers
Training Models to Detect Successive Robot Errors from Human Reactions
Robotics
Teaches robots to see when humans get upset.
Real-Time Detection of Robot Failures Using Gaze Dynamics in Collaborative Tasks
Human-Computer Interaction
Watches your eyes to spot robot mistakes.
Expectations, Explanations, and Embodiment: Attempts at Robot Failure Recovery
Robotics
Explains robot mistakes to make people trust them.