Stability-based Generalization Analysis of Randomized Coordinate Descent for Pairwise Learning
By: Liang Wu, Ruixi Hu, Yunwen Lei
Potential Business Impact:
Improves computer learning by finding better ways to compare things.
Pairwise learning encompasses a variety of machine learning tasks, with ranking and metric learning as its primary representatives. Although randomized coordinate descent (RCD) is popular for many learning problems, there is far less theoretical analysis of the generalization behavior of models trained by RCD, especially in the pairwise learning framework. In this paper, we study the generalization of RCD for pairwise learning. We bound the on-average argument stability for both convex and strongly convex objective functions, and based on these bounds we develop generalization bounds in expectation. An early-stopping strategy is adopted to quantify the trade-off between estimation and optimization. Our analysis further incorporates a low-noise setting into the excess risk bound to achieve an optimistic bound of order $O(1/n)$, where $n$ is the sample size.
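To make the setting concrete, below is a minimal sketch of randomized coordinate descent applied to a pairwise objective. It is not the paper's algorithmic setup or analysis; the least-squares pairwise loss, the step size, and all names are illustrative assumptions. Each iteration samples one coordinate uniformly at random and takes a gradient step along that coordinate only, and the iteration budget plays the role of the early-stopping parameter discussed in the abstract.

```python
# Minimal sketch (assumed setup, not the authors' implementation) of
# randomized coordinate descent (RCD) on a pairwise least-squares objective:
#   F(w) = (2 / (n (n - 1))) * sum_{i < j} ( w^T (x_i - x_j) - (y_i - y_j) )^2
import numpy as np

def rcd_pairwise(X, y, n_iters=5000, eta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    # Precompute all pairwise differences (fine for small n).
    idx_i, idx_j = np.triu_indices(n, k=1)
    dX = X[idx_i] - X[idx_j]            # shape: (n_pairs, d)
    dy = y[idx_i] - y[idx_j]            # shape: (n_pairs,)
    n_pairs = len(dy)
    for _ in range(n_iters):           # n_iters acts as the early-stopping knob
        k = rng.integers(d)             # sample one coordinate uniformly
        residual = dX @ w - dy          # residuals over all pairs
        grad_k = 2.0 * (dX[:, k] @ residual) / n_pairs
        w[k] -= eta * grad_k            # update only coordinate k
    return w

# Usage: recover a linear scorer from noisy pairwise comparisons.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=50)
w_hat = rcd_pairwise(X, y)
```

Note that the objective sums over pairs of examples rather than individual examples, which is what distinguishes pairwise learning from the pointwise setting and complicates the stability analysis.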
Similar Papers
Stability and Generalization of Adversarial Diffusion Training
Machine Learning (CS)
Makes AI learn better even when tricked.
A stochastic gradient descent algorithm with random search directions
Machine Learning (Stat)
Finds better ways to solve math problems faster.
Near-Optimality of Contrastive Divergence Algorithms
Machine Learning (Stat)
Makes computer learning faster and more accurate.