Textual Explanations and Their Evaluations for Reinforcement Learning Policy
By: Ahmad Terra, Mohit Ahmed, Rafia Inam, and more
Potential Business Impact:
Makes AI explain its decisions as transparent rules.
Understanding a Reinforcement Learning (RL) policy is crucial for ensuring that autonomous agents behave according to human expectations. This goal can be achieved using Explainable Reinforcement Learning (XRL) techniques. Although textual explanations are easily understood by humans, ensuring their correctness remains a challenge, and evaluations in the state of the art remain limited. We present a novel XRL framework for generating textual explanations, converting them into a set of transparent rules, improving their quality, and evaluating them. Expert knowledge can be incorporated into this framework, and an automatic predicate generator is also proposed to determine the semantic information of a state. Textual explanations are generated using a Large Language Model (LLM) together with a clustering technique that identifies frequent conditions. These conditions are then converted into rules whose properties, fidelity, and performance are evaluated in the deployed environment. Two refinement techniques are proposed to improve the quality of explanations and reduce conflicting information. Experiments were conducted in three open-source environments to enable reproducibility, and in a telecom use case to evaluate the industrial applicability of the proposed XRL framework. The framework addresses the limitations of an existing method, Autonomous Policy Explanation, and the generated transparent rules can achieve satisfactory performance on certain tasks. It also enables a systematic and quantitative evaluation of textual explanations, providing valuable insights for the XRL field.
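To make the pipeline concrete, below is a minimal sketch of how frequent state-predicate conditions could be turned into transparent if-then rules and scored for fidelity against a black-box policy. The predicate names, the toy policy, and the rule format are illustrative assumptions for this summary, not the paper's actual implementation.

# Hypothetical sketch: converting frequent state-predicate conditions into
# transparent if-then rules and measuring their fidelity to an RL policy.
# All names (predicates, toy policy, thresholds) are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, List

State = Dict[str, float]  # e.g. {"battery": 0.2, "distance_to_goal": 5.0}

# Semantic predicates over states (the paper proposes generating these
# automatically; here they are hand-written for illustration).
PREDICATES: Dict[str, Callable[[State], bool]] = {
    "battery_low": lambda s: s["battery"] < 0.3,
    "goal_near": lambda s: s["distance_to_goal"] < 2.0,
}

@dataclass
class Rule:
    conditions: List[str]  # predicate names that must all hold
    action: str            # action recommended when the rule fires

    def matches(self, state: State) -> bool:
        return all(PREDICATES[p](state) for p in self.conditions)

def rule_fidelity(rules: List[Rule],
                  policy: Callable[[State], str],
                  states: List[State]) -> float:
    """Fraction of covered states where the first matching rule
    agrees with the black-box policy's chosen action."""
    agree, covered = 0, 0
    for s in states:
        for r in rules:
            if r.matches(s):
                covered += 1
                agree += int(r.action == policy(s))
                break
    return agree / covered if covered else 0.0

if __name__ == "__main__":
    # Toy policy standing in for the trained RL agent.
    def toy_policy(s: State) -> str:
        return "recharge" if s["battery"] < 0.3 else "move_to_goal"

    rules = [
        Rule(conditions=["battery_low"], action="recharge"),
        Rule(conditions=["goal_near"], action="move_to_goal"),
    ]
    states = [
        {"battery": 0.1, "distance_to_goal": 6.0},
        {"battery": 0.9, "distance_to_goal": 1.0},
        {"battery": 0.2, "distance_to_goal": 1.5},
    ]
    print(f"fidelity = {rule_fidelity(rules, toy_policy, states):.2f}")

In this sketch, fidelity is simply the fraction of covered states where the first matching rule agrees with the policy; the framework described in the abstract additionally evaluates rule properties and performance in the deployed environment.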
Similar Papers
Interactive Explanations for Reinforcement-Learning Agents
Artificial Intelligence
Lets you ask robots why they do things.
A Survey on Explainable Deep Reinforcement Learning
Machine Learning (CS)
Makes AI decisions understandable and trustworthy.
Can LLM-Generated Textual Explanations Enhance Model Classification Performance? An Empirical Study
Computation and Language
Computers can now explain their answers without humans.