STACHE: Local Black-Box Explanations for Reinforcement Learning Policies
By: Andrew Elashkin, Orna Grumberg
Reinforcement learning agents often behave unexpectedly in sparse-reward or safety-critical environments, creating a strong need for reliable debugging and verification tools. In this paper, we propose STACHE, a comprehensive framework for generating local, black-box explanations for an agent's specific action within discrete Markov games. Our method produces a Composite Explanation consisting of two complementary components: (1) a Robustness Region, the connected neighborhood of states where the agent's action remains invariant, and (2) Minimal Counterfactuals, the smallest state perturbations required to alter that decision. By exploiting the structure of factored state spaces, we introduce an exact, search-based algorithm that circumvents the fidelity gaps of surrogate models. Empirical validation on Gymnasium environments demonstrates that our framework not only explains individual policy actions but also captures the evolution of policy logic during training, from erratic, unstable behavior to optimized, robust strategies, providing actionable insights into agent sensitivity and decision boundaries.
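The two components of the Composite Explanation lend themselves to a search over the factored state space, querying the policy purely as a black box. The sketch below is an illustrative reading of that idea, not the paper's exact algorithm: it assumes a discrete factored state represented as a tuple of integers, a `policy` callable returning a discrete action, per-factor integer `bounds`, and a single-factor ±1 neighborhood; all of these names and choices are assumptions for illustration.

```python
from collections import deque


def neighbors(s, bounds):
    """Yield single-factor +/-1 perturbations of state tuple `s` within per-factor bounds."""
    for i, (lo, hi) in enumerate(bounds):
        for delta in (-1, 1):
            v = s[i] + delta
            if lo <= v <= hi:
                yield s[:i] + (v,) + s[i + 1:]


def robustness_region(policy, state, bounds):
    """Connected set of states (under single-factor steps) sharing the action taken at `state`."""
    base = policy(state)
    region, queue = {state}, deque([state])
    while queue:
        s = queue.popleft()
        for n in neighbors(s, bounds):
            if n not in region and policy(n) == base:
                region.add(n)
                queue.append(n)
    return region


def minimal_counterfactuals(policy, state, bounds):
    """States at the smallest perturbation distance from `state` where the action changes."""
    base = policy(state)
    seen, queue = {state}, deque([(state, 0)])
    found, best = [], None
    while queue:
        s, d = queue.popleft()
        if best is not None and d >= best:
            break  # all remaining states are farther than the closest counterfactuals
        for n in neighbors(s, bounds):
            if n in seen:
                continue
            seen.add(n)
            if policy(n) != base:
                best = d + 1 if best is None else best
                found.append((n, policy(n)))
            else:
                queue.append((n, d + 1))
    return found
```

Because the search only ever calls `policy(state)`, it treats the agent as a black box and returns exact (not surrogate-approximated) results for the discretized neighborhood it explores; the actual STACHE algorithm and its notion of perturbation distance may differ from this simplification.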