Immersive Explainability: Visualizing Robot Navigation Decisions through XAI Semantic Scene Projections in Virtual Reality
By: Jorge de Heuvel, Sebastian Müller, Marlene Wessels, and more
Potential Business Impact:
Shows how robots decide where to go.
End-to-end robot policies achieve high performance through neural networks trained via reinforcement learning (RL). Yet their black-box nature and abstract reasoning pose challenges for human-robot interaction (HRI), because humans may struggle to understand and predict the robot's navigation decisions, hindering trust development. We present a virtual reality (VR) interface that visualizes explainable AI (XAI) outputs and the robot's lidar perception to support intuitive interpretation of RL-based navigation behavior. By visually highlighting objects based on their attribution scores, the interface grounds abstract policy explanations in the scene context. This XAI visualization bridges the gap between opaque numerical attribution scores and a human-centric, semantic level of explanation. A within-subjects study with 24 participants evaluated the effectiveness of our interface across four visualization conditions combining XAI and lidar. Participants ranked scene objects across navigation scenarios by their importance to the robot, then completed a questionnaire assessing subjective understanding and predictability. Results show that semantic projection of attributions significantly enhances non-expert users' objective understanding and subjective awareness of robot behavior. In addition, lidar visualization further improves perceived predictability, underscoring the value of integrating XAI and sensor visualizations for transparent, trustworthy HRI.
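The core idea of "semantic projection" is to lift per-point attribution scores up to the objects they belong to, so a whole chair or wall can be highlighted rather than individual lidar returns. The paper does not publish code here, so the following is only an illustrative sketch with hypothetical names: it assumes per-point attribution scores (e.g., from a gradient-based XAI method) and per-point semantic labels, sums absolute attributions per object, and normalizes to [0, 1] to drive a highlight intensity.

```python
from collections import defaultdict

def aggregate_attributions(point_attributions, point_labels):
    """Sum absolute per-point attribution scores per object label,
    then normalize to [0, 1] so each object can drive a highlight color.
    Illustrative sketch only -- not the authors' implementation."""
    totals = defaultdict(float)
    for score, label in zip(point_attributions, point_labels):
        # Absolute value: both positive and negative attributions
        # indicate that a point mattered to the policy's decision.
        totals[label] += abs(score)
    peak = max(totals.values(), default=0.0)
    if peak == 0.0:
        return dict(totals)
    return {label: total / peak for label, total in totals.items()}

# Example: three lidar points fall on a "chair", two on a "wall".
scores = [0.4, -0.2, 0.1, 0.05, 0.05]
labels = ["chair", "chair", "chair", "wall", "wall"]
print(aggregate_attributions(scores, labels))
# The chair (0.7 total) normalizes to 1.0; the wall (0.1 total) to ~0.14.
```

In a VR interface, the normalized score per object could then be mapped to a color gradient or glow intensity, which is the kind of scene-grounded highlighting the abstract describes.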
Similar Papers
Trust Through Transparency: Explainable Social Navigation for Autonomous Mobile Robots via Vision-Language Models
Robotics
Robots explain their actions so you trust them.
Towards Balancing Preference and Performance through Adaptive Personalized Explainability
Human-Computer Interaction
Helps robots explain their choices to people.
XAI Evaluation Framework for Semantic Segmentation
CV and Pattern Recognition
Helps AI understand pictures better, making it trustworthy.