Score: 1

Constraint-Aware Reinforcement Learning via Adaptive Action Scaling

Published: October 13, 2025 | arXiv ID: 2510.11491v1

By: Murad Dawood, Usama Ahmed Siddiquie, Shahram Khorshidi, and more

Potential Business Impact:

Teaches robots to learn safely without breaking things.

Business Areas:
Industrial Automation, Manufacturing, Science and Engineering

Safe reinforcement learning (RL) seeks to mitigate the unsafe behaviors that arise from exploration during training by reducing constraint violations while maintaining task performance. Existing approaches typically either rely on a single policy to jointly optimize reward and safety, which can cause instability due to conflicting objectives, or use external safety filters that override actions and require prior system knowledge. In this paper, we propose a modular cost-aware regulator that scales the agent's actions based on predicted constraint violations, preserving exploration through smooth action modulation rather than overriding the policy. The regulator is trained to minimize constraint violations while avoiding degenerate suppression of actions. Our approach integrates seamlessly with off-policy RL methods such as SAC and TD3, and achieves state-of-the-art return-to-cost ratios on Safety Gym locomotion tasks with sparse costs, reducing constraint violations by a factor of up to 126 while increasing returns by more than an order of magnitude compared to prior methods.
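To make the mechanism concrete, below is a minimal sketch of how a cost-aware regulator could attenuate a policy's actions rather than override them. The names (CostRegulator, regulator_loss, modulated_action), network sizes, and the exact loss form are illustrative assumptions based on the abstract, not the authors' implementation.

```python
# Minimal sketch of cost-aware action scaling (assumptions noted above).
import torch
import torch.nn as nn


class CostRegulator(nn.Module):
    """Predicts a per-state scaling factor in (0, 1) applied to policy actions."""

    def __init__(self, obs_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps the scale in (0, 1): actions are attenuated, never replaced.
        return torch.sigmoid(self.net(obs))


def regulator_loss(scale: torch.Tensor,
                   predicted_cost: torch.Tensor,
                   suppression_weight: float = 0.1) -> torch.Tensor:
    """Trade off predicted constraint violations against degenerate suppression.

    The first term discourages large action scales in states where a cost
    (constraint violation) is predicted; the second penalizes driving the
    scale toward zero everywhere, which would kill exploration.
    """
    violation_term = (scale.squeeze(-1) * predicted_cost).mean()
    suppression_term = suppression_weight * (1.0 - scale).mean()
    return violation_term + suppression_term


def modulated_action(actor: nn.Module, regulator: CostRegulator,
                     obs: torch.Tensor) -> torch.Tensor:
    """Scale the off-policy actor's action (e.g. from SAC or TD3) smoothly."""
    raw_action = actor(obs)   # action proposed by the RL policy
    scale = regulator(obs)    # learned attenuation in (0, 1)
    return scale * raw_action # smooth modulation preserves exploration
```

Because the regulator only rescales actions, it can be trained and attached as a separate module around an existing off-policy actor, which is what distinguishes this approach from hard safety filters that replace actions outright.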

Page Count
9 pages

Category
Computer Science:
Robotics