Model-Agnostic Policy Explanations with Large Language Models
By: Zhang Xi-Jia, Yue Guo, Shufei Chen, and more
Potential Business Impact:
Explains robot actions so people understand them.
Intelligent agents, such as robots, are increasingly deployed in real-world, human-centric environments. To foster appropriate human trust and meet legal and ethical standards, these agents must be able to explain their behavior. However, state-of-the-art agents are typically driven by black-box models like deep neural networks, limiting their interpretability. We propose a method for generating natural language explanations of agent behavior based only on observed states and actions -- without access to the agent's underlying model. Our approach learns a locally interpretable surrogate model of the agent's behavior from observations, which then guides a large language model to generate plausible explanations with minimal hallucination. Empirical results show that our method produces explanations that are more comprehensible and correct than those from baselines, as judged by both language models and human evaluators. Furthermore, we find that participants in a user study more accurately predicted the agent's future actions when given our explanations, suggesting improved understanding of agent behavior.
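To make the two-stage idea in the abstract concrete, here is a minimal sketch, not the authors' implementation: first fit a locally interpretable surrogate policy on observed state-action pairs, then feed the surrogate's rules into an LLM prompt so the explanation stays grounded in observed behavior. The feature names, the decision-tree surrogate, the synthetic data, and the prompt wording are all illustrative assumptions.

```python
# Sketch of: (1) learn an interpretable surrogate from observed states/actions,
# (2) ground an LLM explanation prompt in the surrogate's rules.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Observed behavior: each row is a state, each label the action the agent took.
# (Synthetic placeholder data; a real run would use logged agent observations.)
rng = np.random.default_rng(0)
states = rng.uniform(size=(500, 3))         # assumed features: distance_to_goal, obstacle_ahead, battery
actions = (states[:, 1] > 0.5).astype(int)  # 0 = "move forward", 1 = "turn"

# Stage 1: a shallow decision tree as the locally interpretable surrogate policy.
feature_names = ["distance_to_goal", "obstacle_ahead", "battery"]
surrogate = DecisionTreeClassifier(max_depth=2).fit(states, actions)
rules = export_text(surrogate, feature_names=feature_names)

# Stage 2: build a prompt that restricts the language model to the learned rules,
# which is the mechanism the abstract credits with reducing hallucination.
query_state = [0.8, 0.7, 0.4]
predicted_action = ["move forward", "turn"][surrogate.predict([query_state])[0]]
prompt = (
    "You explain a robot's behavior to a non-expert.\n"
    f"Learned behavior rules:\n{rules}\n"
    f"Current state: {dict(zip(feature_names, query_state))}\n"
    f"The robot chose to: {predicted_action}.\n"
    "In one sentence, explain why, using only the rules above."
)
print(prompt)  # this prompt would then be sent to an LLM of choice
```

In this sketch the LLM never sees the agent's internal model, only the surrogate's rules and the current state, which mirrors the model-agnostic, observation-only setting described in the abstract.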
Similar Papers
Reasoning-Grounded Natural Language Explanations for Language Models
Machine Learning (CS)
Shows how computers *think* to give better answers.
Utilizing Large Language Models for Machine Learning Explainability
Machine Learning (CS)
AI builds smart computer programs that explain themselves.
Because we have LLMs, we Can and Should Pursue Agentic Interpretability
Artificial Intelligence
Helps people understand smart computer brains.