Efficient Distributed Learning over Decentralized Networks with Convoluted Support Vector Machine
By: Canyi Chen, Nan Qiao, Liping Zhu
Potential Business Impact:
Teaches computers to learn from data faster.
This paper addresses the problem of efficiently classifying high-dimensional data over decentralized networks. Penalized support vector machines (SVMs) are widely used for high-dimensional classification. However, the double nonsmoothness of the objective function, stemming from both the hinge loss and the sparsity-inducing penalty, poses significant challenges for developing efficient decentralized learning methods; many existing procedures suffer from slow, sublinear convergence rates. To overcome this limitation, we apply a convolution-based smoothing technique to the nonsmooth hinge loss. The resulting loss function remains convex while becoming smooth. We then develop an efficient generalized alternating direction method of multipliers (ADMM) algorithm for solving the penalized SVM over decentralized networks. Our theoretical contributions are twofold. First, we establish that our generalized ADMM algorithm achieves provable linear convergence with a simple implementation. Second, after a sufficient number of ADMM iterations, the final sparse estimator attains a near-optimal statistical convergence rate and accurately recovers the true support of the underlying parameters. Extensive numerical experiments on both simulated and real-world datasets validate our theoretical findings.
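To make the convolution-smoothing idea concrete, here is a minimal sketch, not the paper's actual implementation: it assumes a Gaussian kernel, for which convolving the hinge loss with the kernel density yields a closed form that is convex, smooth, and recovers the hinge loss as the bandwidth shrinks. The bandwidth h, the L1 penalty weight, and the toy data below are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def smoothed_hinge(u, h=0.5):
    """Gaussian-convolution-smoothed hinge loss.

    Convolving max(0, 1 - u) with a Gaussian density of bandwidth h gives
    (1 - u) * Phi((1 - u) / h) + h * phi((1 - u) / h),
    which is convex, infinitely differentiable, and tends to the hinge
    loss as h -> 0.
    """
    z = (1.0 - u) / h
    return (1.0 - u) * norm.cdf(z) + h * norm.pdf(z)

def smoothed_hinge_grad(u, h=0.5):
    """Derivative with respect to u: -Phi((1 - u) / h)."""
    return -norm.cdf((1.0 - u) / h)

# Example: a smooth surrogate objective for an L1-penalized linear SVM,
# now amenable to gradient-based or ADMM-type solvers (toy data).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = np.where(rng.standard_normal(100) > 0, 1.0, -1.0)
beta = np.zeros(10)
margins = y * (X @ beta)
objective = smoothed_hinge(margins).mean() + 0.1 * np.abs(beta).sum()
```

Because the smoothed loss has a Lipschitz-continuous gradient, the only remaining nonsmooth piece is the sparsity penalty, which is exactly the structure that splitting methods such as ADMM handle well; the paper's decentralized generalized ADMM builds on this property.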
Similar Papers
Smoothing ADMM for Non-convex and Non-smooth Hierarchical Federated Learning
Machine Learning (CS)
Trains AI smarter and faster with different data.
Parallel Algorithms for Combined Regularized Support Vector Machines: Application in Music Genre Classification
Machine Learning (CS)
Helps computers learn from huge amounts of data.
Modular Distributed Nonconvex Learning with Error Feedback
Optimization and Control
Makes computers learn faster with less data.