Deontically Constrained Policy Improvement in Reinforcement Learning Agents
By: Alena Makarova, Houssam Abbas
Potential Business Impact:
Teaches robots to do good things and avoid bad ones.
Markov Decision Processes (MDPs) are the most common model for decision making under uncertainty in the Machine Learning community. An MDP captures non-determinism, probabilistic uncertainty, and an explicit model of action. A Reinforcement Learning (RL) agent learns to act in an MDP by maximizing a utility function. This paper considers the problem of learning a decision policy that maximizes utility subject to satisfying a constraint expressed in deontic logic. In this setup, the utility captures the agent's mission, such as going quickly from A to B. The deontic formula represents ethical, social, or situational constraints on how the agent may achieve its mission by prohibiting classes of behaviors. We use the logic of Expected Act Utilitarianism, a probabilistic stit logic that can be interpreted over controlled MDPs. We develop a variation on policy improvement and show that it reaches a constrained local maximum of the mission utility. Since in stit logic an agent's duty is derived from value maximization, this can be seen as acting to simultaneously maximize two value functions, one of which is implicit, in a bi-level structure. We illustrate these results with experiments on sample MDPs.
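To make the idea of constrained policy improvement concrete, here is a minimal sketch that is not the authors' algorithm: it assumes a toy randomly generated MDP and models the deontic constraint simply as a per-state set of permitted actions (a stand-in for the Expected Act Utilitarianism formula), restricting the greedy improvement step to that set. All names and parameters (n_states, n_actions, gamma, permitted) are illustrative assumptions.

```python
# Sketch: policy improvement where the greedy step is restricted to permitted
# actions per state, standing in for the deontic constraint in the abstract.
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9

# Toy MDP: transition probabilities P[s, a, s'] and rewards R[s, a].
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Illustrative constraint: in each state, only some actions are permitted.
# A real EAU constraint would be derived from a deontic formula, not hard-coded.
permitted = [np.array([0]), np.array([0, 1]), np.array([0, 1]),
             np.array([1]), np.array([0, 1])]

def evaluate(policy):
    """Exact policy evaluation: solve (I - gamma * P_pi) V = R_pi."""
    P_pi = P[np.arange(n_states), policy]            # (S, S)
    R_pi = R[np.arange(n_states), policy]            # (S,)
    return np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)

def constrained_improvement(policy):
    """Greedy improvement restricted to the permitted actions of each state."""
    V = evaluate(policy)
    Q = R + gamma * P @ V                            # (S, A) action values
    new_policy = policy.copy()
    for s in range(n_states):
        allowed = permitted[s]
        new_policy[s] = allowed[np.argmax(Q[s, allowed])]
    return new_policy

# Iterate to a fixed point: a constrained local maximum of the mission utility.
policy = np.array([a[0] for a in permitted])         # any permitted initial policy
while True:
    new_policy = constrained_improvement(policy)
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy

print("constrained-greedy policy:", policy)
print("state values:", np.round(evaluate(policy), 3))
```

The sketch converges because each restricted greedy step weakly improves the value and there are finitely many permitted policies; the paper's contribution is doing this under a deontic-logic constraint rather than a hand-coded action mask.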
Similar Papers
Model-Based Reinforcement Learning in Discrete-Action Non-Markovian Reward Decision Processes
Machine Learning (CS)
Teaches computers to learn from past events.
Generalization in Monitored Markov Decision Processes (Mon-MDPs)
Artificial Intelligence
Teaches robots to learn from hidden rewards.
Efficient Action-Constrained Reinforcement Learning via Acceptance-Rejection Method and Augmented MDPs
Machine Learning (CS)
Teaches robots to act safely and efficiently.