On Exact Solutions to the Linear Bellman Equation
By: David Ohlin, Richard Pates, Murat Arcak
Potential Business Impact:
Enables optimal control decisions to be computed exactly and in a distributed fashion, which could make controllers faster and cheaper to run.
This paper presents sufficient conditions under which the Bellman equation, for optimal control of systems with dynamics given by a linear operator, admits an explicit solution that can be computed in a distributed fashion. Further, the class of Linearly Solvable MDPs is reformulated as a continuous-state optimal control problem. This class is shown to naturally satisfy the conditions for an explicit solution of the Bellman equation, motivating the extension of previous results to semilinear dynamics in order to account for input nonlinearities. The applicability of the given conditions is illustrated in scenarios with linear and quadratic cost, corresponding to the Stochastic Shortest Path and Linear-Quadratic Regulator problems.
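To see why "linearly solvable" problems admit explicit Bellman solutions, the following sketch illustrates the standard construction (Todorov-style linearly solvable MDPs, not code from this paper): under an exponential change of variables z = exp(-v), the Bellman equation for the desirability z becomes linear, so a first-exit problem reduces to a linear fixed-point iteration. All numerical values (state costs, transition matrix, exit probability) are made-up for illustration.

```python
import numpy as np

# Illustrative sketch (assumed setup, not taken from the paper):
# In a linearly solvable MDP, the desirability z = exp(-v) satisfies the
# LINEAR fixed-point equation
#     z = exp(-q) * (P @ z)
# where q is the per-state cost and P the "passive" transition matrix.
# Here: a tiny first-exit problem where each interior state also exits
# to a zero-cost terminal state (desirability 1) with probability 0.1.

rng = np.random.default_rng(0)
n = 5                                 # number of interior states
q = np.full(n, 0.2)                   # state costs (assumed values)
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)     # row-stochastic passive dynamics
exit_prob = 0.1
P *= (1 - exit_prob)                  # remaining mass exits each step

# Fixed-point iteration on the linear desirability equation; the
# exp(-q) factor makes the map a contraction, so this converges.
z = np.ones(n)
for _ in range(1000):
    z_new = np.exp(-q) * (P @ z + exit_prob * 1.0)  # terminal z = exp(0) = 1
    if np.max(np.abs(z_new - z)) < 1e-12:
        break
    z = z_new

v = -np.log(z)                        # recover the value function
print(np.round(v, 4))
```

The key point, which the paper's conditions generalize, is that the iteration above involves only a linear operator acting on z, so it can be computed componentwise and hence distributed across states.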
Similar Papers
Is Bellman Equation Enough for Learning Control?
Machine Learning (CS)
Examines whether the Bellman equation alone is sufficient for learning control.
Linear Dynamics meets Linear MDPs: Closed-Form Optimal Policies via Reinforcement Learning
Optimization and Control
Derives closed-form optimal policies for linear dynamics via reinforcement learning.
Optimal Output Feedback Learning Control for Discrete-Time Linear Quadratic Regulation
Systems and Control
Learns output-feedback controllers for discrete-time linear quadratic regulation.