Theory of Mind for Explainable Human-Robot Interaction
By: Marie Bauer, Julia Gachot, Matthias Kerzel, et al.
Within the context of human-robot interaction (HRI), Theory of Mind (ToM) is intended to serve as a user-friendly backend to robotic interfaces, enabling robots to infer and respond to human mental states. When integrated into robots, ToM allows them to adapt their internal models to users' behavior, enhancing the interpretability and predictability of their actions. Similarly, Explainable Artificial Intelligence (XAI) aims to make AI systems transparent and interpretable so that humans can understand and interact with them effectively. Since ToM in HRI serves related purposes, we propose to regard ToM as a form of XAI and to evaluate it through the eValuation XAI (VXAI) framework and its seven desiderata. This paper identifies a critical gap in the application of ToM within HRI: existing methods rarely assess the extent to which a robot's explanations correspond to its actual internal reasoning. To address this limitation, we propose integrating ToM within XAI frameworks. By embedding ToM principles in XAI, we argue for a shift in perspective, as current XAI research focuses predominantly on the AI system itself and often lacks user-centered explanations. Incorporating ToM would shift the focus toward the user's informational needs and perspective.