Safe and Optimal Variable Impedance Control via Certified Reinforcement Learning

Published: November 20, 2025 | arXiv ID: 2511.16330v1

By: Shreyas Kumar, Ravi Prakash

Potential Business Impact:

Robots learn to move and make physical contact safely.

Business Areas:
Robotics Hardware, Science and Engineering, Software

Reinforcement learning (RL) offers a powerful approach for robots to learn complex, collaborative skills by combining Dynamic Movement Primitives (DMPs) for motion and Variable Impedance Control (VIC) for compliant interaction. However, this model-free paradigm often risks instability and unsafe exploration due to the time-varying nature of impedance gains. This work introduces Certified Gaussian Manifold Sampling (C-GMS), a novel trajectory-centric RL framework that learns combined DMP and VIC policies while guaranteeing Lyapunov stability and actuator feasibility by construction. Our approach reframes policy exploration as sampling from a mathematically defined manifold of stable gain schedules. This ensures every policy rollout is guaranteed to be stable and physically realizable, thereby eliminating the need for reward penalties or post-hoc validation. Furthermore, we provide a theoretical guarantee of bounded tracking error even in the presence of bounded model errors and deployment-time uncertainties. We demonstrate C-GMS in simulation and validate it on a real robot, paving the way for reliable autonomous interaction in complex environments.
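The key idea above, exploring only over gain schedules that are stable and feasible by construction, can be illustrated with a minimal sketch. The paper's exact manifold parameterization is not given in this summary, so the following is a hypothetical stand-in: unconstrained Gaussian samples are squashed into actuator stiffness limits, and damping is tied to stiffness via critical damping, a standard sufficient condition for a dissipative impedance law. Every sampled rollout is then feasible without reward penalties or post-hoc checks.

```python
import numpy as np

def sample_gain_schedule(rng, n_steps=50, n_dof=3, k_min=10.0, k_max=500.0):
    """Illustrative sketch (not the paper's exact C-GMS manifold):
    sample a time-varying impedance gain schedule that is positive,
    within actuator limits, and critically damped *by construction*."""
    z = rng.normal(size=(n_steps, n_dof))       # unconstrained exploration noise
    sig = 1.0 / (1.0 + np.exp(-z))              # squash each sample into (0, 1)
    K = k_min + (k_max - k_min) * sig           # stiffness stays in [k_min, k_max]
    D = 2.0 * np.sqrt(K)                        # critical damping: D = 2*sqrt(K)
    return K, D

rng = np.random.default_rng(0)
K, D = sample_gain_schedule(rng)
# Feasibility and positivity hold for every sample, so no rollout
# needs to be rejected or penalized after the fact.
assert np.all((K >= 10.0) & (K <= 500.0))
assert np.all(D > 0.0)
```

Because the constraints are baked into the parameterization rather than enforced by rejection, the policy search can run entirely in the unconstrained space of `z`, which is what makes this style of exploration attractive for model-free RL.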

Country of Origin
🇮🇳 India

Page Count
8 pages

Category
Computer Science:
Robotics