Unifying Entropy Regularization in Optimal Control: From and Back to Classical Objectives via Iterated Soft Policies and Path Integral Solutions
By: Ajinkya Bhole, Mohammad Mahmoudi Filabadi, Guillaume Crevecoeur, and more
Potential Business Impact:
Makes robots learn faster and smarter.
This paper develops a unified perspective on several stochastic optimal control formulations through the lens of Kullback-Leibler regularization. We propose a central problem that separates the KL penalties on policies and transitions, assigning them independent weights, thereby generalizing the standard trajectory-level KL regularization commonly used in probabilistic and KL-regularized control. This generalized formulation acts as a generative structure from which various control problems can be recovered, including classical Stochastic Optimal Control (SOC), Risk-Sensitive Optimal Control (RSOC), and their policy-based KL-regularized counterparts. We refer to the latter as soft-policy SOC and RSOC; they yield alternative problems with tractable solutions. Beyond serving as regularized variants, we show that these soft-policy formulations majorize the original SOC and RSOC problems, meaning that the regularized solution can be iterated to retrieve the original solution. Furthermore, we identify a structurally synchronized case of the risk-seeking soft-policy RSOC formulation, wherein the policy and transition KL-regularization weights coincide. Remarkably, this specific setting gives rise to several powerful properties, such as a linear Bellman equation, a path integral solution, and compositionality, thereby extending these computationally favourable properties to a broad class of control problems.
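As a rough illustration of the kind of objective the abstract describes (the notation below is our own assumption, not taken from the paper), a trajectory-level problem that penalizes the policy and transition KL terms with independent weights can be sketched as

\[
\min_{\pi,\, p}\; \mathbb{E}\!\left[\sum_{t=0}^{T-1} c(x_t, u_t)
+ \lambda_{\pi}\, D_{\mathrm{KL}}\!\big(\pi(\cdot \mid x_t)\,\|\,\pi_0(\cdot \mid x_t)\big)
+ \lambda_{p}\, D_{\mathrm{KL}}\!\big(p(\cdot \mid x_t, u_t)\,\|\,p_0(\cdot \mid x_t, u_t)\big)\right],
\]

where \(\pi_0\) and \(p_0\) are reference (prior) policy and dynamics and \(\lambda_{\pi}, \lambda_{p} \ge 0\) are the independent regularization weights; standard trajectory-level KL regularization ties the two penalties together, while the "structurally synchronized" case mentioned in the abstract takes \(\lambda_{\pi} = \lambda_{p} = \lambda\). In the classical linearly solvable setting (Todorov/Kappen-style KL control), that synchronization is what produces a linear Bellman equation in the desirability function \(z(x) = \exp\!\big(-V(x)/\lambda\big)\),

\[
z(x) = \exp\!\big(-c(x)/\lambda\big)\; \mathbb{E}_{x' \sim p_0(\cdot \mid x)}\!\big[z(x')\big],
\]

whose solution admits a path integral (Monte Carlo over passive-dynamics rollouts) representation and composes linearly across terminal costs, mirroring the computationally favourable properties the abstract extends to the soft-policy RSOC setting.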
Similar Papers
Advancing Frontiers of Path Integral Theory for Stochastic Optimal Control
Optimization and Control
Lets robots learn and act in tricky situations.
Beyond KL-divergence: Risk Aware Control Through Cross Entropy and Adversarial Entropy Regularization
Systems and Control
Makes smart robots handle unexpected problems better.
Global Convergence of Policy Gradient for Entropy Regularized Linear-Quadratic Control with multiplicative noise
Systems and Control
Teaches computers to learn and make good choices.