BC-ADMM: An Efficient Non-convex Constrained Optimizer with Robotic Applications
By: Zherong Pan, Kui Wu
Potential Business Impact:
Robots move faster and smarter with new math.
Non-convex constrained optimization problems are ubiquitous in robotic applications such as multi-agent navigation, UAV trajectory optimization, and soft robot simulation. For this problem class, conventional optimizers suffer from small step sizes and slow convergence. We propose BC-ADMM, a variant of the Alternating Direction Method of Multipliers (ADMM), that solves a class of non-convex constrained optimizations via biconvex constraint relaxation. Our algorithm allows larger step sizes by breaking the problem into small-scale sub-problems that can be solved in parallel. We show that our method has both a theoretical convergence speed guarantee and a practical convergence guarantee in the asymptotic sense. Through numerical experiments on four robotic applications, we show that BC-ADMM converges faster than conventional gradient descent and Newton's method in terms of wall-clock time.
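To make the splitting idea concrete, below is a minimal sketch of the standard scaled-form ADMM iteration applied to a toy lasso problem. It illustrates only the generic ADMM structure that the paper builds on (alternating sub-problem solves plus a dual update); the choice of problem, the `soft_threshold` helper, and the parameters `rho` and `lam` are illustrative assumptions, not the paper's BC-ADMM or its biconvex constraint relaxation.

```python
import numpy as np

# Generic scaled-form ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# split as f(x) + g(z) with the consensus constraint x = z.

def soft_threshold(v, kappa):
    # Proximal operator of the l1 norm (used in the z-update).
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # reused by every x-update
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # x-update: small quadratic sub-problem
        z = soft_threshold(x + u, lam / rho)          # z-update: closed-form prox
        u = u + x - z                                 # scaled dual (multiplier) update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    print(admm_lasso(A, b, lam=0.1)[:5])
```

Each iteration only solves cheap sub-problems in the primal variables before a simple dual update, which is the property that lets ADMM-style methods take larger effective steps and parallelize across sub-problems.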
Similar Papers
Constrained Performance Boosting Control for Nonlinear Systems via ADMM
Systems and Control
Makes robots move safely without crashing.
Stochastic momentum ADMM for nonconvex and nonsmooth optimization with application to PnP algorithm
Optimization and Control
Solves hard math problems faster and better.
An Alternating Direction Method of Multipliers for Topology Optimization
Optimization and Control
Designs better shapes for things using math.