Is Bellman Equation Enough for Learning Control?
By: Haoxiang You, Lekan Molu, Ian Abraham
Potential Business Impact:
Makes smart robots learn the right way.
The Bellman equation and its continuous-time counterpart, the Hamilton-Jacobi-Bellman (HJB) equation, serve as necessary conditions for optimality in reinforcement learning and optimal control. While the value function is known to be the unique solution to the Bellman equation in tabular settings, we demonstrate that this uniqueness fails to hold in continuous state spaces. Specifically, for linear dynamical systems, we prove the Bellman equation admits at least $\binom{2n}{n}$ solutions, where $n$ is the state dimension. Crucially, only one of these solutions yields both an optimal policy and a stable closed-loop system. We then identify a common failure mode in value-based methods: convergence to unstable solutions, driven by the exponential imbalance between admissible and inadmissible solutions. Finally, to address this issue, we introduce a positive-definite neural architecture that guarantees, by construction, convergence to the stable solution.
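As a concrete illustration of the multiplicity claim (a standard scalar LQR computation, not an excerpt from the paper), consider the one-dimensional system $\dot x = x + u$ with cost $\int_0^\infty (x^2 + u^2)\,dt$ and the quadratic ansatz $V(x) = p x^2$:

$$
\min_u \big[ x^2 + u^2 + V'(x)(x + u) \big] = 0
\;\;\Rightarrow\;\; u^* = -p x
\;\;\Rightarrow\;\; 1 + 2p - p^2 = 0
\;\;\Rightarrow\;\; p = 1 \pm \sqrt{2}.
$$

Both roots satisfy the HJB equation, matching $\binom{2n}{n} = \binom{2}{1} = 2$ for $n = 1$, but only $p = 1 + \sqrt{2}$ gives a positive-definite value and a stable closed loop $\dot x = (1 - p)x = -\sqrt{2}\,x$; the root $p = 1 - \sqrt{2}$ yields an unstable closed loop $\dot x = \sqrt{2}\,x$.

Below is a minimal sketch of the positive-definite parameterization idea in the linear-quadratic setting, assuming a quadratic value $V_\theta(x) = x^\top (L L^\top + \epsilon I)\, x$; the class name and details are illustrative and the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class PosDefQuadraticValue(nn.Module):
    """Quadratic value V(x) = x^T (L L^T + eps I) x, positive definite
    for every parameter setting by construction (illustrative sketch only)."""

    def __init__(self, state_dim: int, eps: float = 1e-3):
        super().__init__()
        self.L = nn.Parameter(0.1 * torch.randn(state_dim, state_dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L L^T is positive semidefinite; adding eps * I makes it positive definite,
        # so the learned value can never correspond to an inadmissible (sign-indefinite) solution.
        P = self.L @ self.L.T + self.eps * torch.eye(self.L.shape[0])
        return torch.einsum("bi,ij,bj->b", x, P, x)
```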
Similar Papers
On Exact Solutions to the Linear Bellman Equation
Optimization and Control
Helps robots learn faster and make better choices.
Tractable Representations for Convergent Approximation of Distributional HJB Equations
Machine Learning (CS)
Helps computers learn better in real-time.
Ensemble based Closed-Loop Optimal Control using Physics-Informed Neural Networks
Machine Learning (CS)
Teaches computers to control machines perfectly.