Constraint-Aware Reinforcement Learning via Adaptive Action Scaling
By: Murad Dawood, Usama Ahmed Siddiquie, Shahram Khorshidi, and more
Potential Business Impact:
Teaches robots to learn safely without breaking things.
Safe reinforcement learning (RL) seeks to mitigate unsafe behaviors that arise from exploration during training by reducing constraint violations while maintaining task performance. Existing approaches typically rely on a single policy to jointly optimize reward and safety, which can cause instability due to conflicting objectives, or they use external safety filters that override actions and require prior system knowledge. In this paper, we propose a modular cost-aware regulator that scales the agent's actions based on predicted constraint violations, preserving exploration through smooth action modulation rather than overriding the policy. The regulator is trained to minimize constraint violations while avoiding degenerate suppression of actions. Our approach integrates seamlessly with off-policy RL methods such as SAC and TD3, and achieves state-of-the-art return-to-cost ratios on Safety Gym locomotion tasks with sparse costs, reducing constraint violations by a factor of up to 126 while increasing returns by over an order of magnitude compared to prior methods.
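To make the idea concrete, below is a minimal sketch (not the authors' code) of such a cost-aware action regulator. It assumes a learned cost critic that predicts constraint violation for a state-action pair; the class names, network sizes, and the suppression-penalty weight `lambda_reg` are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a cost-aware action regulator for off-policy RL.
# Assumptions (not from the paper): network architectures, the sigmoid
# parameterization of the scale, and the quadratic suppression penalty.
import torch
import torch.nn as nn


class CostCritic(nn.Module):
    """Predicts the expected constraint cost of a state-action pair."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))


class ActionRegulator(nn.Module):
    """Outputs a per-dimension scale in (0, 1) that modulates the policy's action."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, act):
        # A sigmoid keeps the scale smooth and bounded: the policy's action is
        # attenuated, never overridden, so exploration is preserved.
        return torch.sigmoid(self.net(torch.cat([obs, act], dim=-1)))


def regulator_loss(regulator, cost_critic, obs, act, lambda_reg=0.1):
    """Minimize the predicted cost of the scaled action while penalizing
    degenerate suppression (scales collapsing toward zero)."""
    scale = regulator(obs, act)
    scaled_act = scale * act
    predicted_cost = cost_critic(obs, scaled_act).mean()
    # Regularizer discourages the trivial solution of shrinking every action.
    suppression_penalty = ((1.0 - scale) ** 2).mean()
    return predicted_cost + lambda_reg * suppression_penalty
```

In this reading, the executed action at interaction time would be the policy's action multiplied by the regulator's scale, with the base SAC or TD3 agent trained as usual on reward and the regulator trained separately on the cost signal; the exact training schedule and cost-critic targets in the paper may differ from this sketch.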
Similar Papers
MPC-Guided Safe Reinforcement Learning and Lipschitz-Based Filtering for Structured Nonlinear Systems
Robotics
Makes robots and cars safer and smarter.
Enhance the Safety in Reinforcement Learning by ADRC Lagrangian Methods
Machine Learning (CS)
Keeps robots safe while they learn new tasks.
Vision-based Goal-Reaching Control for Mobile Robots Using a Hierarchical Learning Framework
Robotics
Keeps big robots safe while they learn.