Explainable Reinforcement Learning Agents Using World Models

Published: May 12, 2025 | arXiv ID: 2505.08073v2

By: Madhuri Singh, Amal Alabdulkarim, Gennie Mansi, and more

Potential Business Impact:

Shows why an AI agent made the choices it did, and what would have made it choose differently.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Explainable AI (XAI) systems have been proposed to help people understand how AI systems produce outputs and behaviors. Explainable Reinforcement Learning (XRL) has added complexity due to the temporal nature of sequential decision-making. Further, non-AI experts do not necessarily have the ability to alter an agent or its policy. We introduce a technique for using World Models to generate explanations for Model-Based Deep RL agents. World Models predict how the world will change when actions are performed, allowing for the generation of counterfactual trajectories. However, identifying what a user wanted the agent to do is not enough to understand why the agent did something else. We augment Model-Based RL agents with a Reverse World Model, which predicts what the state of the world should have been for the agent to prefer a given counterfactual action. We show that explanations that show users what the world should have been like significantly increase their understanding of the agent's policy. We hypothesize that our explanations can help users learn how to control the agent's execution by manipulating the environment.
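
To make the two components in the abstract concrete, here is a minimal PyTorch sketch. It is an illustration of the general idea only, not the authors' implementation: the class names (WorldModel, ReverseWorldModel), the rollout helper, the dimensions, and the architectures are all hypothetical placeholders. The forward model rolls out what would happen after a counterfactual action; the reverse model proposes a state in which the agent would have preferred that action.

```python
# Hedged sketch of the paper's two explanation components.
# All names, shapes, and architectures here are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HIDDEN = 16, 4, 64  # placeholder dimensions


class WorldModel(nn.Module):
    """Forward model: predicts the next state from a state and an action."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, STATE_DIM),
        )

    def forward(self, state, action_onehot):
        return self.net(torch.cat([state, action_onehot], dim=-1))


class ReverseWorldModel(nn.Module):
    """Reverse model: predicts what the state should have been for the
    agent to prefer the given counterfactual action (per the abstract)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, STATE_DIM),
        )

    def forward(self, state, counterfactual_action_onehot):
        return self.net(torch.cat([state, counterfactual_action_onehot], dim=-1))


def counterfactual_rollout(world_model, policy, state, first_action, horizon=5):
    """Generate a counterfactual trajectory: force a user-chosen first
    action, then follow the agent's policy through the forward model.
    `policy` is assumed to map a state to a one-hot action tensor."""
    trajectory, action = [state], first_action
    for _ in range(horizon):
        state = world_model(state, action)
        trajectory.append(state)
        action = policy(state)
    return trajectory
```

Under this reading, an explanation pairs the counterfactual trajectory ("here is what would have happened had the agent done what you wanted") with the reverse model's output ("here is what the world would have needed to look like for the agent to prefer that action").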

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Artificial Intelligence