Reinforcement Learning-Based Neuroadaptive Control of Robotic Manipulators under Deferred Constraints
By: Hamed Rahimi Nohooji, Abolfazl Zaraki, Holger Voos
Potential Business Impact:
Robots learn to move safely, even when they mess up.
This paper presents a reinforcement learning-based neuroadaptive control framework for robotic manipulators operating under deferred constraints. The proposed approach extends traditional barrier Lyapunov function methods by introducing a smooth constraint enforcement mechanism that offers two key advantages: (i) it minimizes control effort in unconstrained regions and progressively increases it near constraints, improving energy efficiency, and (ii) it enables gradual constraint activation through a prescribed-time shifting function, allowing safe operation even when initial conditions violate the constraints. To address system uncertainties and improve adaptability, an actor-critic reinforcement learning framework is employed: the critic network estimates the value function, while the actor network learns an optimal control policy in real time, enabling adaptive constraint handling without requiring an explicit system model. Lyapunov-based stability analysis guarantees the boundedness of all closed-loop signals, and the effectiveness of the proposed method is validated through numerical simulations.
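To make the deferred-constraint idea concrete, the sketch below shows one plausible form of a prescribed-time shifting function: it rises smoothly from 0 at t = 0 to 1 at a user-chosen time T and stays at 1 afterward, so a tracking error that initially violates its bound is only gradually pulled under the barrier. The polynomial form, the parameters T and n, and the helper names are illustrative assumptions, not the paper's exact construction.

```python
def shifting_function(t: float, T: float = 2.0, n: int = 3) -> float:
    """Hypothetical prescribed-time shifting function.

    Rises monotonically from 0 at t = 0 to 1 at t = T (a simple
    polynomial ramp, assumed here for illustration) and saturates
    at 1 thereafter, so the constraint is fully active after time T.
    """
    if t >= T:
        return 1.0
    return (t / T) ** n


def shifted_error(e: float, t: float, T: float = 2.0) -> float:
    """Scale the raw tracking error by the shifting function.

    At t = 0 the shifted error is zero regardless of e, so a barrier
    imposed on the *shifted* error is satisfied even when the initial
    condition violates the original constraint; as t -> T the shifted
    error converges to the true error and the constraint takes full effect.
    """
    return shifting_function(t, T) * e
```

For example, `shifted_error(e=5.0, t=0.0)` returns 0.0 even though the raw error of 5.0 may lie outside the constraint region, while for t >= T the shifted error equals the raw error.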
Similar Papers
Stability Enhancement in Reinforcement Learning via Adaptive Control Lyapunov Function
Machine Learning (CS)
Makes robots learn safely without breaking things.
Deep Reinforcement Learning-Based Motion Planning and PDE Control for Flexible Manipulators
Robotics
Makes robot arms move smoothly without shaking.
Robustly Constrained Dynamic Games for Uncertain Nonlinear Dynamics
Systems and Control
Robots avoid crashing, even with bad information.