Model-Agnostic Policy Explanations with Large Language Models

Published: April 8, 2025 | arXiv ID: 2504.05625v2

By: Zhang Xi-Jia, Yue Guo, Shufei Chen, and more

Potential Business Impact:

Generates natural-language explanations of robot actions so people can understand and predict them.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Intelligent agents, such as robots, are increasingly deployed in real-world, human-centric environments. To foster appropriate human trust and meet legal and ethical standards, these agents must be able to explain their behavior. However, state-of-the-art agents are typically driven by black-box models like deep neural networks, limiting their interpretability. We propose a method for generating natural language explanations of agent behavior based only on observed states and actions -- without access to the agent's underlying model. Our approach learns a locally interpretable surrogate model of the agent's behavior from observations, which then guides a large language model to generate plausible explanations with minimal hallucination. Empirical results show that our method produces explanations that are more comprehensible and correct than those from baselines, as judged by both language models and human evaluators. Furthermore, we find that participants in a user study more accurately predicted the agent's future actions when given our explanations, suggesting improved understanding of agent behavior.
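The core idea in the abstract (learn an interpretable surrogate of the agent's policy from observed state-action pairs, then use it to ground an LLM's explanation) can be sketched as follows. This is a minimal illustration, not the paper's method: the one-split decision stump, the toy data, and the prompt wording are all assumptions made for the sketch.

```python
from collections import Counter

def majority(labels):
    """Most common label in a sequence, or None if empty."""
    labels = list(labels)
    return Counter(labels).most_common(1)[0][0] if labels else None

def fit_stump(states, actions):
    """Fit a one-split decision stump to observed (state, action) pairs.

    This stands in for the paper's locally interpretable surrogate: it
    needs no access to the agent's underlying model, only observations.
    Returns (feature_index, threshold, action_if_below, action_if_above).
    """
    best_acc, best_rule = 0.0, None
    for f in range(len(states[0])):
        for t in sorted({s[f] for s in states}):
            hi = [a for s, a in zip(states, actions) if s[f] > t]
            lo = [a for s, a in zip(states, actions) if s[f] <= t]
            pred_hi, pred_lo = majority(hi), majority(lo)
            correct = sum(a == pred_hi for a in hi) + sum(a == pred_lo for a in lo)
            acc = correct / len(states)
            if acc > best_acc:
                best_acc, best_rule = acc, (f, t, pred_lo, pred_hi)
    return best_rule

def explanation_prompt(rule, feature_names):
    """Turn the surrogate's rule into a prompt that constrains the LLM
    to the observed behavior, limiting hallucination."""
    f, t, lo, hi = rule
    return (f"The agent's observed policy is approximated by the rule: "
            f"if {feature_names[f]} > {t} then action '{hi}', "
            f"else action '{lo}'. Explain this behavior in plain language, "
            f"using only the rule above.")

# Toy observations: a robot that brakes when an obstacle is close.
states = [(0.5,), (0.8,), (2.0,), (3.1,), (0.3,), (2.5,)]
actions = ["brake", "brake", "go", "go", "brake", "go"]
rule = fit_stump(states, actions)
print(explanation_prompt(rule, ["obstacle_distance"]))
```

On this toy data the stump recovers the rule "brake when obstacle_distance <= 0.8, else go"; the resulting prompt would then be sent to an LLM, which explains behavior it can verify against the rule rather than inventing motives.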

Country of Origin
🇺🇸 United States

Page Count
30 pages

Category
Computer Science:
Machine Learning (CS)