HEXAR: a Hierarchical Explainability Architecture for Robots
By: Tamlin Love, Ferran Gebellí, Pradip Pramanick and more
Potential Business Impact:
Helps robots explain why they do things.
As robotic systems become increasingly complex, the need for explainable decision-making becomes critical. Existing explainability approaches in robotics typically either focus on individual modules, which can be difficult to query from the perspective of high-level behaviour, or employ monolithic approaches, which do not exploit the modularity of robotic architectures. We present HEXAR (Hierarchical EXplainability Architecture for Robots), a novel framework that provides a plug-in, hierarchical approach to generating explanations about robotic systems. HEXAR consists of specialised component explainers using diverse explanation techniques (e.g., LLM-based reasoning, causal models, feature importance) tailored to specific robot modules, orchestrated by an explainer selector that chooses the most appropriate one for a given query. We implement and evaluate HEXAR on a TIAGo robot performing assistive tasks in a home environment, comparing it against end-to-end and aggregated baseline approaches across 180 scenario-query variations. We observe that HEXAR significantly outperforms the baselines in root cause identification, incorrect information exclusion, and runtime, offering a promising direction for transparent autonomous systems.
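The core idea — per-module component explainers behind a selector that routes each query to the most appropriate one — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names, keyword-based relevance scoring, and example explainers are all hypothetical stand-ins for HEXAR's actual components.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Explanation:
    source: str  # which component explainer produced this
    text: str    # human-readable explanation

class ComponentExplainer:
    """Hypothetical interface for a per-module explainer."""
    name = "base"

    def relevance(self, query: str) -> float:
        """Score in [0, 1] estimating how well this explainer fits the query."""
        raise NotImplementedError

    def explain(self, query: str) -> Explanation:
        raise NotImplementedError

class NavigationExplainer(ComponentExplainer):
    """Stand-in for an explainer over the navigation module."""
    name = "navigation"
    KEYWORDS = ("path", "route", "move", "navigate")

    def relevance(self, query: str) -> float:
        q = query.lower()
        return 1.0 if any(k in q for k in self.KEYWORDS) else 0.0

    def explain(self, query: str) -> Explanation:
        # A real explainer might inspect a planner trace or costmap.
        return Explanation(self.name, "Chose the shortest collision-free path.")

class TaskPlannerExplainer(ComponentExplainer):
    """Stand-in for an explainer over high-level task planning."""
    name = "task_planner"
    KEYWORDS = ("goal", "task", "plan")

    def relevance(self, query: str) -> float:
        q = query.lower()
        return 0.8 if any(k in q for k in self.KEYWORDS) else 0.0

    def explain(self, query: str) -> Explanation:
        return Explanation(self.name, "The current task required visiting the kitchen.")

class ExplainerSelector:
    """Routes a query to the highest-scoring component explainer."""

    def __init__(self, explainers: List[ComponentExplainer]):
        self.explainers = explainers

    def answer(self, query: str) -> Optional[Explanation]:
        best = max(self.explainers, key=lambda e: e.relevance(query))
        if best.relevance(query) == 0.0:
            return None  # no component explainer is competent for this query
        return best.explain(query)

selector = ExplainerSelector([NavigationExplainer(), TaskPlannerExplainer()])
result = selector.answer("Why did you take that route?")
print(result.source)  # -> navigation
```

The plug-in quality comes from the shared interface: adding an explainer for a new robot module only requires registering another `ComponentExplainer`, with no change to the selector.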
Similar Papers
Accessible and Pedagogically-Grounded Explainability for Human-Robot Interaction: A Framework Based on UDL and Symbolic Interfaces
Robotics
Helps robots explain themselves to everyone.
Personalised Explanations in Long-term Human-Robot Interactions
Robotics
Robots explain things better by remembering what you know.
Trust Through Transparency: Explainable Social Navigation for Autonomous Mobile Robots via Vision-Language Models
Robotics
Robots explain their actions so you trust them.