Explainable Reinforcement Learning Agents Using World Models
By: Madhuri Singh, Amal Alabdulkarim, Gennie Mansi, and more
Potential Business Impact:
Shows why computers chose one action instead of another.
Explainable AI (XAI) systems have been proposed to help people understand how AI systems produce outputs and behaviors. Explainable Reinforcement Learning (XRL) has an added complexity due to the temporal nature of sequential decision-making. Further, non-AI experts do not necessarily have the ability to alter an agent or its policy. We introduce a technique for using World Models to generate explanations for Model-Based Deep RL agents. World Models predict how the world will change when actions are performed, allowing for the generation of counterfactual trajectories. However, identifying what a user wanted the agent to do is not enough to understand why the agent did something else. We augment Model-Based RL agents with a Reverse World Model, which predicts what the state of the world should have been for the agent to prefer a given counterfactual action. We show that explanations that show users what the world should have been like significantly increase their understanding of the agent's policy. We hypothesize that our explanations can help users learn how to control the agent's execution by manipulating the environment.
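Concretely, the abstract describes two learned mappings: a forward World Model that predicts the next state from a state-action pair (enabling counterfactual rollouts), and a Reverse World Model that maps a state and a counterfactual action to the state in which the agent would have preferred that action. Below is a minimal PyTorch sketch of that structure, assuming a small discrete-action setting; all class names, function names, and dimensions are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: names, architectures, and dimensions are
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, HIDDEN = 8, 4, 64

class ForwardWorldModel(nn.Module):
    """Predicts the next state from (state, action); used to roll out
    counterfactual trajectories ('what would happen if the agent did X')."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_ACTIONS, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, STATE_DIM),
        )

    def forward(self, state, action_onehot):
        return self.net(torch.cat([state, action_onehot], dim=-1))

class ReverseWorldModel(nn.Module):
    """Predicts the state the world *should have been in* for the agent
    to prefer a given counterfactual action over its actual one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + N_ACTIONS, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, STATE_DIM),
        )

    def forward(self, state, counterfactual_action_onehot):
        # Output is interpreted as a state s* under which the policy
        # would pick the given counterfactual action.
        return self.net(torch.cat([state, counterfactual_action_onehot], dim=-1))

def explain_counterfactual(policy, reverse_wm, state, user_action):
    """Contrastive explanation: report the agent's actual choice and the
    counterfactual state under which it would have chosen user_action."""
    actual_action = policy(state).argmax(dim=-1)
    a_cf = F.one_hot(torch.tensor(user_action), N_ACTIONS).float()
    counterfactual_state = reverse_wm(state, a_cf)
    return actual_action.item(), counterfactual_state

if __name__ == "__main__":
    # Untrained stand-ins for a learned policy and reverse world model.
    policy = nn.Linear(STATE_DIM, N_ACTIONS)
    reverse_wm = ReverseWorldModel()
    s = torch.randn(STATE_DIM)
    chosen, s_star = explain_counterfactual(policy, reverse_wm, s, user_action=2)
    print(f"agent chose action {chosen}; for it to prefer action 2,")
    print(f"the state would have needed to look like: {s_star.tolist()}")
```

In this framing, the explanation shown to a user is the contrast between the actual state and the predicted counterfactual state, which is what the paper argues helps users understand, and potentially steer, the agent's behavior by changing the environment.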
Similar Papers
Interactive Explanations for Reinforcement-Learning Agents
Artificial Intelligence
Lets you ask robots why they do things.
Model-Agnostic Policy Explanations with Large Language Models
Machine Learning (CS)
Explains robot actions so people understand them.
TalkToAgent: A Human-centric Explanation of Reinforcement Learning Agents with Large Language Models
Artificial Intelligence
Lets you ask computers why they do things.