Optimizing Control-Friendly Trajectories with Self-Supervised Residual Learning
By: Kexin Guo, Zihan Yang, Yuhang Liu, and more
Potential Business Impact:
Robots learn to move faster and more accurately.
Real-world physics can be analytically modeled only up to a certain level of precision for modern, intricate robotic systems. As a result, the residual physics left unmodeled during controller synthesis makes accurate tracking of aggressive trajectories challenging. This paper presents a self-supervised residual learning and trajectory optimization framework to address these challenges. First, unknown dynamic effects on the closed-loop model are learned and treated as residuals of the nominal dynamics, jointly forming a hybrid model. We show that learning with analytic gradients can be achieved using only trajectory-level data, while retaining accurate long-horizon prediction with an arbitrary integration step size. A trajectory optimizer is then developed to compute an optimal reference trajectory that minimizes the residual physics along it, yielding trajectories that are friendly to the downstream control level. Agile quadrotor flight illustrates that, by utilizing the hybrid dynamics, the proposed optimizer outputs aggressive motions that can be precisely tracked.
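To make the hybrid-model idea concrete, here is a minimal sketch in a hypothetical 1D setting: a nominal double integrator with an unmodeled drag term standing in for "residual physics". The residual is fit from trajectory-level data and added to the nominal dynamics, so long-horizon rollouts of the hybrid model track the true system far better than the nominal one. All names (`f_nominal`, `f_hybrid`, `rollout`) and the linear least-squares fit are illustrative assumptions, not the paper's actual architecture or learning procedure.

```python
import numpy as np

DT = 0.01    # integration step size
DRAG = 0.8   # true but unmodeled drag coefficient (ground truth, unknown to the model)

def f_nominal(x, u):
    """Nominal dynamics: x = [position, velocity], control u = acceleration."""
    return np.array([x[1], u])

def f_true(x, u):
    """Ground-truth dynamics: nominal plus velocity-proportional drag."""
    return f_nominal(x, u) + np.array([0.0, -DRAG * x[1]])

def rollout(f, x0, controls, dt=DT):
    """Integrate dynamics f over a control sequence (explicit Euler)."""
    traj = [np.asarray(x0, dtype=float)]
    for u in controls:
        traj.append(traj[-1] + dt * f(traj[-1], u))
    return np.array(traj)

# Collect trajectory-level data from the true system.
controls = np.sin(np.linspace(0, 2 * np.pi, 200))
traj_true = rollout(f_true, [0.0, 0.0], controls)

# Fit the residual acceleration as a function of velocity (least squares);
# the hybrid model is nominal dynamics + learned residual.
vel = traj_true[:-1, 1]
accel_err = (traj_true[1:, 1] - traj_true[:-1, 1]) / DT - controls
coef = np.polyfit(vel, accel_err, deg=1)  # slope should recover -DRAG

def f_hybrid(x, u):
    return f_nominal(x, u) + np.array([0.0, np.polyval(coef, x[1])])

# Long-horizon prediction: compare nominal and hybrid rollouts to the truth.
traj_nom = rollout(f_nominal, [0.0, 0.0], controls)
traj_hyb = rollout(f_hybrid, [0.0, 0.0], controls)
err_nom = np.abs(traj_nom[:, 0] - traj_true[:, 0]).max()
err_hyb = np.abs(traj_hyb[:, 0] - traj_true[:, 0]).max()
print(f"nominal position error: {err_nom:.4f}, hybrid position error: {err_hyb:.6f}")
```

Because the residual here happens to be exactly linear in velocity, the fit recovers it almost perfectly; the paper's setting replaces the least-squares fit with a learned model trained through analytic gradients of the rollout.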
Similar Papers
Dual-quaternion learning control for autonomous vehicle trajectory tracking with safety guarantees
Robotics
Helps robots move smoothly despite bumps.
Real-Time Generation of Near-Minimum-Energy Trajectories via Constraint-Informed Residual Learning
Robotics
Robots move using less power, much faster.
Towards Bio-Inspired Robotic Trajectory Planning via Self-Supervised RNN
Robotics
Teaches robot arms to move to new spots.