Sharpness-aware Second-order Latent Factor Model for High-dimensional and Incomplete Data
By: Jialiang Wang, Xueyan Bao, Hao Wu
The Second-order Latent Factor (SLF) model, a class of low-rank representation learning methods, has proven effective at extracting node-to-node interaction patterns from High-dimensional and Incomplete (HDI) data. However, its optimization is notoriously difficult due to its bilinear and non-convex nature. Sharpness-aware Minimization (SAM) has recently been proposed to find flat local minima when minimizing non-convex objectives, thereby improving the generalization of representation-learning models. To address this challenge, we propose a Sharpness-aware SLF (SSLF) model that incorporates SAM into SLF optimization. SSLF embodies two key ideas: (1) acquiring second-order information via Hessian-vector products; and (2) injecting a sharpness term into the curvature (Hessian) through these Hessian-vector products. Experiments on multiple industrial datasets demonstrate that the proposed model consistently outperforms state-of-the-art baselines.
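The abstract does not spell out the update rule, but the two ingredients it names can be illustrated on a toy problem. The sketch below is not the authors' SSLF algorithm; it is a minimal illustration, under assumed settings, of (a) a SAM-style perturbed-gradient step on a simple latent-factor loss over the observed entries of an HDI matrix, and (b) a finite-difference Hessian-vector product used as a curvature probe. All names (loss_and_grad, sam_step, hvp) and hyperparameter values (rho, lam, lr) are hypothetical.

```python
# Minimal sketch: SAM-style update + Hessian-vector product on a latent-factor loss.
# Not the SSLF implementation; all names and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HDI data: a sparse set of observed (row, col, value) triples.
n_rows, n_cols, rank = 50, 40, 4
obs = [(rng.integers(n_rows), rng.integers(n_cols), rng.normal()) for _ in range(300)]

lam, rho, lr = 0.05, 0.05, 0.01  # L2 weight, SAM radius, learning rate (assumed values)


def loss_and_grad(P, Q):
    """Squared error on observed entries plus L2 regularization, with analytic gradients."""
    gP, gQ, loss = np.zeros_like(P), np.zeros_like(Q), 0.0
    for i, j, r in obs:
        e = P[i] @ Q[j] - r
        loss += e * e
        gP[i] += 2 * e * Q[j]
        gQ[j] += 2 * e * P[i]
    loss += lam * (np.sum(P * P) + np.sum(Q * Q))
    return loss, gP + 2 * lam * P, gQ + 2 * lam * Q


def hvp(P, Q, vP, vQ, h=1e-4):
    """Finite-difference Hessian-vector product: (grad(theta + h*v) - grad(theta)) / h."""
    _, gP0, gQ0 = loss_and_grad(P, Q)
    _, gP1, gQ1 = loss_and_grad(P + h * vP, Q + h * vQ)
    return (gP1 - gP0) / h, (gQ1 - gQ0) / h


def sam_step(P, Q):
    """One SAM-style update: take the gradient at the perturbed point theta + rho*g/||g||."""
    _, gP, gQ = loss_and_grad(P, Q)
    norm = np.sqrt(np.sum(gP * gP) + np.sum(gQ * gQ)) + 1e-12
    eP, eQ = rho * gP / norm, rho * gQ / norm        # ascent perturbation toward sharp directions
    _, gP_s, gQ_s = loss_and_grad(P + eP, Q + eQ)    # gradient evaluated at the perturbed point
    return P - lr * gP_s, Q - lr * gQ_s


P = 0.1 * rng.standard_normal((n_rows, rank))
Q = 0.1 * rng.standard_normal((n_cols, rank))
for epoch in range(20):
    P, Q = sam_step(P, Q)
    l, _, _ = loss_and_grad(P, Q)
print("final training loss:", l)

# Curvature probe along the current gradient direction via the HVP sketch.
_, gP, gQ = loss_and_grad(P, Q)
HvP, HvQ = hvp(P, Q, gP, gQ)
print("g^T H g (local curvature along the gradient):", np.sum(gP * HvP) + np.sum(gQ * HvQ))
```

In this toy setup the HVP gives second-order (curvature) information without ever forming the Hessian, which is the role the abstract assigns to Hessian-vector products in SSLF; how the sharpness term is injected into that curvature is specific to the paper and is not reproduced here.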