Model Reconciliation through Explainability and Collaborative Recovery in Assistive Robotics
By: Britt Besch, Tai Mai, Jeremias Thun, and more
Potential Business Impact:
Explains unexpected robot actions so users can understand and correct them.
Whenever humans and robots work together, it is essential that unexpected robot behavior can be explained to the user. Especially in applications such as shared control, the user and the robot must share the same model of the objects in the world and the actions that can be performed on them. In this paper, we achieve this with a so-called model reconciliation framework. We leverage a Large Language Model to predict and explain the difference between the robot's and the human's mental models, without requiring a formal model of the user's mental state. Furthermore, our framework aims to resolve the model divergence after the explanation by allowing the human to correct the robot. We provide an implementation in an assistive robotics domain, where we conduct a set of experiments with a real wheelchair-based mobile manipulator and its digital twin.
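To make the idea concrete, here is a minimal sketch of how an LLM might be used to predict and explain a model divergence and then accept a human correction. This is not the paper's implementation: it assumes an OpenAI-style chat API, and the prompt wording, model name, and function names are all illustrative.

```python
# Minimal sketch of LLM-based model reconciliation (illustrative only;
# prompts, names, and control flow are assumptions, not the paper's code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def explain_model_divergence(robot_model: str, observed_confusion: str) -> str:
    """Ask an LLM to predict how the user's mental model likely differs
    from the robot's world model and to phrase an explanation for the user."""
    prompt = (
        "The robot's model of the world and its available actions:\n"
        f"{robot_model}\n\n"
        "The user was surprised by the following robot behavior:\n"
        f"{observed_confusion}\n\n"
        "Predict the most likely difference between the user's mental model "
        "and the robot's model, and explain it to the user in plain language."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def reconcile(robot_model: str, observed_confusion: str) -> str:
    """Explain the divergence, then let the human correct the robot's model."""
    explanation = explain_model_divergence(robot_model, observed_confusion)
    print(explanation)
    correction = input("Correct the robot's model if needed (or press Enter): ")
    return correction or robot_model  # a correction replaces the old model


if __name__ == "__main__":
    # Hypothetical world-model snippet for an assistive manipulation task.
    model = "cup_1 is on table_2; action pick(cup_1) requires an empty gripper"
    reconcile(model, "The robot refused to pick up the cup.")
```

The key design point the sketch tries to capture is that no formal model of the user is maintained: the LLM infers the likely divergence from the robot's model and the observed confusion, and the human's correction closes the loop.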
Similar Papers
Bi-Directional Mental Model Reconciliation for Human-Robot Interaction with Large Language Models
Robotics
Reconciles human and robot mental models in both directions using LLMs.
Trust Through Transparency: Explainable Social Navigation for Autonomous Mobile Robots via Vision-Language Models
Robotics
Robots explain their navigation decisions to build user trust.
Towards Cognitive Collaborative Robots: Semantic-Level Integration and Explainable Control for Human-Centric Cooperation
Robotics
Integrates semantic understanding and explainable control for safe human-robot cooperation.