Chasing Stability: Humanoid Running via Control Lyapunov Function Guided Reinforcement Learning
By: Zachary Olkin, Kejun Li, William D. Compton, and more
Potential Business Impact:
Robot learns to run and stay balanced.
Achieving highly dynamic behaviors on humanoid robots, such as running, requires controllers that are both robust and precise, which makes them difficult to design. Classical control methods offer valuable insight into how such systems can stabilize themselves, but synthesizing real-time controllers for nonlinear and hybrid dynamics remains challenging. Recently, reinforcement learning (RL) has gained popularity for locomotion control due to its ability to handle these complex dynamics. In this work, we embed ideas from nonlinear control theory, specifically control Lyapunov functions (CLFs), along with optimized dynamic reference trajectories, into the RL training process to shape the reward. This approach, CLF-RL, eliminates the need to handcraft and tune heuristic reward terms, while simultaneously encouraging certifiable stability and providing meaningful intermediate rewards to guide learning. By grounding policy learning in dynamically feasible trajectories, we expand the robot's dynamic capabilities and enable running that includes both flight and single-support phases. The resulting policy operates reliably on a treadmill and in outdoor environments, demonstrating robustness to disturbances applied to the torso and feet. Moreover, it achieves accurate global reference tracking using only on-board sensors, taking a critical step toward integrating these dynamic motions into a full autonomy stack.
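To make the reward-shaping idea concrete, here is a minimal sketch of how a control-Lyapunov-style term might enter an RL reward: a quadratic Lyapunov candidate is evaluated on the tracking error against a reference trajectory, and the reward grows as that candidate shrinks. This is not the authors' implementation; the function clf_reward, the weighting matrix P, the gain alpha, and the reference state x_ref are all illustrative assumptions.

```python
import numpy as np

# Hypothetical CLF-shaped reward term (illustrative sketch, not the paper's code).
def clf_reward(x, x_ref, P, alpha=2.0):
    """Reward the policy for shrinking a Lyapunov candidate V(e) = e^T P e,
    where e is the tracking error against an optimized reference trajectory."""
    e = x - x_ref                      # tracking error relative to the reference state
    V = e @ P @ e                      # quadratic Lyapunov candidate; V >= 0, V = 0 at e = 0
    return float(np.exp(-alpha * V))   # dense, bounded reward, maximal as V -> 0

# Toy usage: 2-state error with an identity-weighted candidate.
x = np.array([0.1, -0.05])
x_ref = np.zeros(2)
P = np.eye(2)
print(clf_reward(x, x_ref, P))
```

Because the Lyapunov candidate decreases along stabilizing trajectories, a reward of this shape gives the policy a dense, meaningful learning signal at every timestep, which is the kind of intermediate guidance the abstract contrasts with handcrafted heuristic reward terms.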
Similar Papers
CLF-RL: Control Lyapunov Function Guided Reinforcement Learning
Robotics
Helps robots walk better without falling.
Stability Enhancement in Reinforcement Learning via Adaptive Control Lyapunov Function
Machine Learning (CS)
Makes robots learn safely without breaking things.
Lyapunov Stability Learning with Nonlinear Control via Inductive Biases
Machine Learning (CS)
Makes robots safer by learning to avoid mistakes.