Score: 1

Is Bellman Equation Enough for Learning Control?

Published: March 4, 2025 | arXiv ID: 2503.02171v2

By: Haoxiang You, Lekan Molu, Ian Abraham

BigTech Affiliations: Microsoft

Potential Business Impact:

Helps robots and control systems learn policies that are both optimal and stable.

Business Areas:
Embedded Systems Hardware, Science and Engineering, Software

The Bellman equation and its continuous-time counterpart, the Hamilton-Jacobi-Bellman (HJB) equation, serve as necessary conditions for optimality in reinforcement learning and optimal control. While the value function is known to be the unique solution to the Bellman equation in tabular settings, we demonstrate that this uniqueness fails to hold in continuous state spaces. Specifically, for linear dynamical systems, we prove the Bellman equation admits at least $\binom{2n}{n}$ solutions, where $n$ is the state dimension. Crucially, only one of these solutions yields both an optimal policy and a stable closed-loop system. We then demonstrate a common failure mode in value-based methods: convergence to unstable solutions due to the exponential imbalance between admissible and inadmissible solutions. To address this issue, we introduce a positive-definite neural architecture that, by construction, guarantees convergence to the stable solution.
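The multiplicity claim can be seen in the simplest case. For a one-dimensional linear system ($n = 1$), a quadratic value candidate $V(x) = p\,x^2$ satisfies the HJB condition whenever $p$ solves a scalar algebraic Riccati equation, which has $\binom{2}{1} = 2$ real roots; only one of them gives a stable closed loop. The sketch below is illustrative only (the system and cost parameters `a`, `b`, `q`, `r` are made up for the example and are not taken from the paper).

```python
# Minimal sketch, assuming a scalar linear system dx/dt = a*x + b*u with
# quadratic cost integral of (q*x^2 + r*u^2) dt. A quadratic value
# candidate V(x) = p*x^2 solves the HJB equation iff p satisfies
#   (b^2 / r) * p^2 - 2*a*p - q = 0,
# which has two real roots; only one yields a stable closed loop.
import numpy as np

a, b, q, r = 1.0, 1.0, 1.0, 1.0   # hypothetical parameters for illustration

disc = np.sqrt(a**2 + (b**2) * q / r)
p_plus  = r * (a + disc) / b**2   # positive root of the Riccati equation
p_minus = r * (a - disc) / b**2   # negative root of the Riccati equation

for p in (p_plus, p_minus):
    k = b * p / r                 # feedback gain from u = -(b*p/r) * x
    a_cl = a - b * k              # closed-loop dynamics dx/dt = a_cl * x
    label = "stable" if a_cl < 0 else "unstable"
    print(f"p = {p:+.3f}  closed-loop eigenvalue = {a_cl:+.3f}  ({label})")

# Expected output (approximate):
#   p = +2.414  closed-loop eigenvalue = -1.414  (stable)
#   p = -0.414  closed-loop eigenvalue = +1.414  (unstable)
```

Note that the stabilizing root is also the only positive one, which is the intuition behind restricting the learned value function to be positive definite. One common construction of such a constraint (not necessarily the architecture used in the paper) is to parameterize $V_\theta(x) = x^\top L_\theta L_\theta^\top x + \varepsilon \lVert x \rVert^2$ with $\varepsilon > 0$, which is positive definite for any choice of $L_\theta$.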

Country of Origin
🇺🇸 United States

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)