Distributed Stochastic Proximal Algorithm on Riemannian Submanifolds for Weakly-convex Functions
By: Jishu Zhao, Xi Wang, Jinlong Lei, and more
Potential Business Impact:
Helps robots learn to work together better.
This paper investigates distributed stochastic optimization problems on compact submanifolds embedded in Euclidean space for multi-agent network systems. To handle the manifold structure, we propose a distributed Riemannian stochastic proximal algorithm framework built on retractions and a Riemannian consensus protocol, and analyze three specific instances: the distributed Riemannian stochastic subgradient, proximal point, and prox-linear algorithms. When the local costs are weakly convex and the initial points satisfy certain conditions, we show that the iterates generated by this framework converge in expectation to a nearly stationary point while achieving consensus. We further establish a convergence rate of $\mathcal{O}\big(\frac{1+\kappa_g}{\sqrt{k}}\big)$ for the framework, where $k$ denotes the number of iterations and $\kappa_g$ captures the impact of the manifold geometry on the algorithm's performance. Finally, numerical experiments are conducted to corroborate the theoretical results and illustrate the empirical performance.
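The abstract does not give pseudocode, so the following is a minimal illustrative sketch, not the authors' implementation, of one distributed Riemannian stochastic subgradient iteration of the kind described: each agent combines a Riemannian consensus direction with a stochastic subgradient step and maps the result back to the manifold via a retraction. Everything concrete here is an assumption chosen for illustration: the manifold is the unit sphere with the metric-projection retraction, the weakly convex local cost is a robust regression loss $f_i(x) = \frac{1}{m}\|A_i x - b_i\|_1$, the network is a ring with a hypothetical doubly stochastic mixing matrix `W`, and the step sizes `alpha` and `eta = 0.5/sqrt(k)` are placeholder choices consistent with the stated $\mathcal{O}(1/\sqrt{k})$ rate.

```python
import numpy as np

# Manifold primitives for the unit sphere S^{d-1}, a compact submanifold of R^d (illustrative choice).
def proj_tangent(x, u):
    """Orthogonal projection of u onto the tangent space of the sphere at x."""
    return u - np.dot(x, u) * x

def retract(x, v):
    """Metric-projection retraction: step from x along the tangent vector v, then renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring graph (each agent averages with its two neighbors)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] += 0.25
        W[i, (i + 1) % n] += 0.25
    return W

def stochastic_riemannian_subgrad(x, A, b, batch):
    """Mini-batch Riemannian subgradient of the weakly convex cost f_i(x) = mean(|A_i x - b_i|):
    a Euclidean subgradient estimate projected onto the tangent space at x."""
    g_euc = A[batch].T @ np.sign(A[batch] @ x - b[batch]) / len(batch)
    return proj_tangent(x, g_euc)

rng = np.random.default_rng(0)
n_agents, d, m = 8, 20, 50                       # agents, ambient dimension, samples per agent
x_true = rng.standard_normal(d)
x_true /= np.linalg.norm(x_true)
A = [rng.standard_normal((m, d)) for _ in range(n_agents)]
b = [Ai @ x_true for Ai in A]                    # noiseless targets, for illustration only

W = ring_mixing_matrix(n_agents)
alpha = 1.0                                      # consensus step size (hypothetical)
X = [x / np.linalg.norm(x) for x in rng.standard_normal((n_agents, d))]

for k in range(1, 2001):
    eta = 0.5 / np.sqrt(k)                       # O(1/sqrt(k)) step size, matching the stated rate
    X_new = []
    for i in range(n_agents):
        # Riemannian consensus direction: weighted neighbor average, pulled back to the tangent space.
        avg = sum(W[i, j] * X[j] for j in range(n_agents))
        consensus = proj_tangent(X[i], avg - X[i])
        # Stochastic Riemannian subgradient on a sampled mini-batch.
        batch = rng.choice(m, size=10, replace=False)
        g = stochastic_riemannian_subgrad(X[i], A[i], b[i], batch)
        # Retraction keeps the updated iterate on the manifold.
        X_new.append(retract(X[i], alpha * consensus - eta * g))
    X = X_new

consensus_err = max(np.linalg.norm(Xi - X[0]) for Xi in X)
avg_cost = np.mean([np.mean(np.abs(A[i] @ X[0] - b[i])) for i in range(n_agents)])
print(f"consensus error: {consensus_err:.2e}, average local cost at agent 0: {avg_cost:.3f}")
```

Replacing the subgradient step with an inexact proximal point or prox-linear update at each agent would give analogous sketches of the other two instances mentioned in the abstract.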
Similar Papers
Decentralized Online Riemannian Optimization Beyond Hadamard Manifolds
Optimization and Control
Makes smart machines learn better on curved paths.
Efficient Optimization with Orthogonality Constraint: a Randomized Riemannian Submanifold Method
Optimization and Control
Makes big computer math problems solve faster.
Mean-square and linear convergence of a stochastic proximal point algorithm in metric spaces of nonpositive curvature
Optimization and Control
Helps computers find answers faster in complex math.