Explaining Decentralized Multi-Agent Reinforcement Learning Policies
By: Kayla Boggess, Sarit Kraus, Lu Feng
Potential Business Impact:
Helps people understand how AI teams work together.
Multi-Agent Reinforcement Learning (MARL) has gained significant interest in recent years, enabling sequential decision-making across multiple agents in various domains. However, most existing explanation methods focus on centralized MARL, failing to address the uncertainty and nondeterminism inherent in decentralized settings. We propose methods to generate policy summarizations that capture task ordering and agent cooperation in decentralized MARL policies, along with query-based explanations for When, Why Not, and What types of user queries about specific agent behaviors. We evaluate our approach across four MARL domains and two decentralized MARL algorithms, demonstrating its generalizability and computational efficiency. User studies show that our summarizations and explanations significantly improve user question-answering performance and enhance subjective ratings on metrics such as understanding and satisfaction.
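To make the query-based explanation idea concrete, here is a minimal sketch of how a "When" query (when does a given agent take a given action?) might be answered for decentralized policies: sample rollouts, and summarize the state features that always hold when the queried action is chosen. All names here (the rollout loop, feature-set states, the `when_query` helper) are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def rollout(policies, env_step, init_state, horizon=20):
    """Run one episode with per-agent (decentralized) policies;
    yield (state, joint_action) pairs. States are frozensets of
    feature predicates in this toy sketch (an assumption)."""
    state = init_state
    for _ in range(horizon):
        joint_action = {agent: pi(state) for agent, pi in policies.items()}
        yield state, joint_action
        state = env_step(state, joint_action)

def when_query(policies, env_step, init_state, agent, action, episodes=100):
    """Hypothetical 'When' explanation: collect the state features
    observed whenever `agent` takes `action`, and keep those present
    in every such occurrence as the answer."""
    feature_counts = defaultdict(int)
    for _ in range(episodes):
        for state, joint_action in rollout(policies, env_step, init_state):
            if joint_action[agent] == action:
                for feature in state:
                    feature_counts[feature] += 1
    total = max(feature_counts.values(), default=0)
    return {f for f, c in feature_counts.items() if c == total and total > 0}
```

A "Why Not" query could reuse the same rollout machinery by contrasting the features of states where the queried action *was* taken against those where it was not; this sketch only covers the "When" case.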
Similar Papers
Explaining Strategic Decisions in Multi-Agent Reinforcement Learning for Aerial Combat Tactics
Multiagent Systems
Explains how AI fights to build trust.
A Visual Analytics System to Understand Behaviors of Multi Agents in Reinforcement Learning
Human-Computer Interaction
Shows how computer players learn together.
Goal-Oriented Multi-Agent Reinforcement Learning for Decentralized Agent Teams
Multiagent Systems
Helps self-driving vehicles work together better.