Safety-Aware Reinforcement Learning for Control via Risk-Sensitive Action-Value Iteration and Quantile Regression
By: Clinton Enwerem, Aniruddh G. Puranic, John S. Baras, and more
Potential Business Impact:
Robot learns to avoid crashing while reaching goals.
Mainstream approximate action-value iteration reinforcement learning (RL) algorithms suffer from overestimation bias, leading to suboptimal policies in high-variance stochastic environments. Quantile-based action-value iteration methods reduce this bias by learning a distribution of the expected cost-to-go via quantile regression. However, ensuring that the learned policy satisfies safety constraints remains a challenge when these constraints are not explicitly integrated into the RL framework. Existing methods often require complex neural architectures or manual trade-offs arising from combined cost functions. To address this, we propose a risk-regularized quantile-based algorithm that integrates Conditional Value-at-Risk (CVaR) to enforce safety without complex architectures. We also provide theoretical guarantees on the contraction properties of the risk-sensitive distributional Bellman operator in Wasserstein space, ensuring convergence to a unique cost distribution. Simulations of a mobile robot in a dynamic reach-avoid task show that our approach achieves more goal successes, fewer collisions, and better safety-performance trade-offs than risk-neutral methods.
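To make the idea concrete, below is a minimal tabular sketch of quantile action-value iteration with a CVaR-regularized action-selection criterion, in the spirit of the abstract. It is not the paper's implementation: the cost-minimization setting, the regularized criterion (mean cost plus a weighted CVaR term), and all names and hyperparameters (n_states, alpha, lam, and so on) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a tabular, cost-minimizing variant of quantile
# action-value iteration with a CVaR penalty. Names and hyperparameters are
# assumptions, not the paper's implementation.

n_states, n_actions, n_quantiles = 25, 4, 32
gamma, lr = 0.99, 0.05
alpha, lam = 0.9, 0.5                                   # CVaR level and risk weight

taus = (np.arange(n_quantiles) + 0.5) / n_quantiles     # quantile midpoints
theta = np.zeros((n_states, n_actions, n_quantiles))    # cost-to-go quantile estimates

def cvar(quantiles, alpha):
    """CVaR_alpha of a cost distribution: mean of the worst (1 - alpha) tail."""
    k = max(1, int(np.ceil((1 - alpha) * len(quantiles))))
    return np.sort(quantiles)[-k:].mean()

def risk_regularized_value(state, action):
    """Expected cost plus a CVaR penalty (the risk-regularized criterion assumed here)."""
    q = theta[state, action]
    return q.mean() + lam * cvar(q, alpha)

def greedy_action(state):
    """Choose the action minimizing the risk-regularized cost."""
    return int(np.argmin([risk_regularized_value(state, a) for a in range(n_actions)]))

def quantile_update(state, action, cost, next_state, done):
    """One quantile-regression step toward the distributional Bellman target."""
    next_a = greedy_action(next_state)
    target = cost + (0.0 if done else gamma) * theta[next_state, next_a]  # N target samples
    # Pinball-loss subgradient: residual u_ij = target_j - theta_i pushes theta_i
    # up with weight tau_i when positive, down with weight (tau_i - 1) otherwise.
    residual = target[None, :] - theta[state, action][:, None]
    grad = np.where(residual >= 0, taus[:, None], taus[:, None] - 1.0)
    theta[state, action] += lr * grad.mean(axis=1)
```

Setting lam to zero recovers a risk-neutral quantile iteration that acts on the mean of the learned cost distribution; increasing lam shifts the policy toward avoiding high-cost (unsafe) outcomes in the distribution's tail.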
Similar Papers
Risk-Aware Reinforcement Learning with Bandit-Based Adaptation for Quadrupedal Locomotion
Robotics
Robots walk better and safer in new places.
Risk-Aware Safe Reinforcement Learning for Control of Stochastic Linear Systems
Systems and Control
Teaches robots to be safe and smart.
Constraint-Aware Reinforcement Learning via Adaptive Action Scaling
Robotics
Teaches robots to learn safely without breaking things.