Trust Through Transparency: Explainable Social Navigation for Autonomous Mobile Robots via Vision-Language Models
By: Oluwadamilola Sotomi, Devika Kodi, Aliasghar Arab
Potential Business Impact:
Robots explain their actions so you trust them.
Service and assistive robots are increasingly being deployed in dynamic social environments; however, ensuring transparent and explainable interactions remains a significant challenge. This paper presents a multimodal explainability module that integrates vision-language models and heat maps to improve transparency during navigation. The proposed system enables robots to perceive, analyze, and articulate their observations through natural language summaries. User studies (n=30) showed a majority preference for real-time explanations, indicating improved trust and understanding. Our experiments were validated through confusion-matrix analysis to assess the level of agreement with human expectations. Our experimental and simulation results emphasize the effectiveness of explainability in autonomous navigation, enhancing trust and interpretability.
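The perceive → attend → explain pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the vision-language model is stubbed by `describe_scene`, and the "heat map" is a coarse hypothetical grid of attention weights over detections.

```python
# Illustrative sketch of an explainable-navigation pipeline:
# detections -> spatial attention heat map -> natural-language summary.
# The VLM is replaced by a simple stub; names and the grid layout are
# assumptions for illustration, not the paper's actual module.

def attention_heatmap(detections, grid=(3, 3)):
    """Accumulate detection confidence into a coarse spatial grid."""
    heat = [[0.0] * grid[1] for _ in range(grid[0])]
    for _label, row, col, weight in detections:
        heat[row][col] += weight
    return heat

def describe_scene(detections, heat):
    """Stand-in for a vision-language model: summarize perception."""
    hottest = max(
        ((r, c) for r in range(len(heat)) for c in range(len(heat[0]))),
        key=lambda rc: heat[rc[0]][rc[1]],
    )
    labels = sorted({label for label, _r, _c, _w in detections})
    return (f"I see {', '.join(labels)}; my attention is on grid cell "
            f"{hottest}, so I will slow down and yield.")

detections = [("person", 1, 1, 0.9), ("doorway", 0, 2, 0.4)]
heat = attention_heatmap(detections)
print(describe_scene(detections, heat))
```

A real system would replace `describe_scene` with a VLM query over the camera frame and render `heat` as an overlay, but the shape of the explanation, salient objects plus the attended region plus the resulting maneuver, is what the user studies evaluated.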
Similar Papers
Immersive Explainability: Visualizing Robot Navigation Decisions through XAI Semantic Scene Projections in Virtual Reality
Robotics
Shows how robots decide where to go.
Enhancing Explainability with Multimodal Context Representations for Smarter Robots
Human-Computer Interaction
Robots understand what you say and see.
Narrate2Nav: Real-Time Visual Navigation with Implicit Language Reasoning in Human-Centric Environments
Robotics
Robot learns to move by watching and understanding.