Score: 2

Personalised Explanations in Long-term Human-Robot Interactions

Published: July 3, 2025 | arXiv ID: 2507.03049v1

By: Ferran Gebellí, Anaís Garrell, Jan-Gerrit Habekost, and more

Potential Business Impact:

Robots explain things better by remembering what you know.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In the field of Human-Robot Interaction (HRI), a fundamental challenge is to facilitate human understanding of robots. The emerging domain of eXplainable HRI (XHRI) investigates methods to generate explanations and evaluate their impact on human-robot interactions. Previous works have highlighted the need to personalise the level of detail of these explanations to enhance usability and comprehension. Our paper presents a framework designed to update and retrieve user knowledge-memory models, allowing the explanations' level of detail to be adapted while referencing previously acquired concepts. Three architectures based on our proposed framework that use Large Language Models (LLMs) are evaluated in two distinct scenarios: a hospital patrolling robot and a kitchen assistant robot. Experimental results demonstrate that a two-stage architecture, which first generates an explanation and then personalises it, is the architecture that effectively reduces the level of detail only when the user has related prior knowledge.
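
The two-stage design described in the abstract can be illustrated with a minimal sketch, not the authors' implementation: stage one drafts a full explanation, stage two shortens it only when the user's knowledge memory contains related concepts, and the memory is updated afterwards. Names such as `call_llm` and `UserKnowledgeMemory` are assumptions introduced here for illustration.

```python
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM completion API."""
    return f"[LLM response to: {prompt[:60]}...]"


@dataclass
class UserKnowledgeMemory:
    """Tracks which concepts this user has already been taught."""
    known_concepts: set[str] = field(default_factory=set)

    def related(self, concepts: list[str]) -> list[str]:
        return [c for c in concepts if c in self.known_concepts]

    def update(self, concepts: list[str]) -> None:
        self.known_concepts.update(concepts)


def explain(event: str, concepts: list[str], memory: UserKnowledgeMemory) -> str:
    # Stage 1: generate a detailed, user-agnostic explanation of the robot's behaviour.
    draft = call_llm(f"Explain in full detail why the robot did: {event}")

    # Stage 2: personalise; reduce detail only for concepts the user already knows,
    # referencing them instead of re-explaining them.
    known = memory.related(concepts)
    if known:
        draft = call_llm(
            "Shorten this explanation, referring back to concepts the user "
            f"already knows ({', '.join(known)}) instead of re-explaining them:\n{draft}"
        )

    # Remember what the user has now been exposed to, for later interactions.
    memory.update(concepts)
    return draft


if __name__ == "__main__":
    memory = UserKnowledgeMemory()
    print(explain("rerouted around a closed corridor", ["path planning"], memory))
    # A later, related explanation can now be shorter.
    print(explain("chose a longer but open corridor", ["path planning"], memory))
```

The key property evaluated in the paper, that detail is reduced only when related user knowledge exists, corresponds here to the `if known:` branch: with an empty memory the draft is returned in full.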

Country of Origin
🇪🇸 Spain, 🇩🇪 Germany

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Robotics