An Optimal Policy for Learning Controllable Dynamics by Exploration
By: Peter N. Loxley
Controllable Markov chains describe the dynamics of sequential decision-making tasks and are the central component in optimal control and reinforcement learning. In this work, we give the general form of an optimal policy for learning controllable dynamics in an unknown environment by exploring over a limited time horizon. This policy is simple to implement and efficient to compute, and allows an agent to "learn by exploring" as it greedily maximizes its information gain by selecting controls from a constraint set that changes over time during exploration. We give a simple parameterization of the set of controls and present an algorithm for finding an optimal policy. This form of policy is needed because certain types of states restrict control of the dynamics, such as transient states, absorbing states, and non-backtracking states. We show why the occurrence of these states makes a non-stationary policy essential for achieving optimal exploration. Six interesting examples of controllable dynamics are treated in detail. Policy optimality is demonstrated using counting arguments, by comparing with suboptimal policies, and by making use of a sequential improvement property from dynamic programming.
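To make the greedy information-gain idea concrete, the following is a minimal sketch of one exploration step, assuming a Bayesian (Dirichlet) model of unknown transition probabilities and a myopic expected-entropy-reduction objective. The `counts` data structure, the `admissible_controls` set (the time-varying constraint set), and the specific objective are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def greedy_info_gain_control(counts, state, admissible_controls):
    """Pick the control in the current admissible set that maximizes the
    expected one-step reduction in entropy of the estimated transition
    distribution out of `state`.

    counts[u][state] is a vector of Dirichlet counts over next states for
    control u taken in `state` (hypothetical data structure).
    """
    best_u, best_gain = None, -np.inf
    for u in admissible_controls:
        alpha = np.asarray(counts[u][state], dtype=float)
        p_hat = alpha / alpha.sum()              # posterior-mean transition probabilities
        h_before = entropy(p_hat)
        # Expected entropy of the updated estimate after observing one more
        # transition, averaged over the predictive next-state distribution.
        h_after = 0.0
        for s_next, prob in enumerate(p_hat):
            alpha_new = alpha.copy()
            alpha_new[s_next] += 1.0
            h_after += prob * entropy(alpha_new / alpha_new.sum())
        gain = h_before - h_after                # nonnegative by concavity of entropy
        if gain > best_gain:
            best_u, best_gain = u, gain
    return best_u
```

In this sketch the non-stationarity enters only through `admissible_controls`, which the caller would update at each time step; the paper's treatment of transient, absorbing, and non-backtracking states is what determines how that set should change.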