StablePCA: Learning Shared Representations across Multiple Sources via Minimax Optimization

Published: May 2, 2025 | arXiv ID: 2505.00940v1

By: Zhenyu Wang, Molei Liu, Jing Lei, and more

Potential Business Impact:

Extracts shared low-dimensional patterns from mixed data sources.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

When synthesizing multisource high-dimensional data, a key objective is to extract low-dimensional feature representations that effectively approximate the original features across different sources. Such general feature extraction facilitates the discovery of transferable knowledge, mitigates systematic biases such as batch effects, and promotes fairness. In this paper, we propose Stable Principal Component Analysis (StablePCA), a novel method for group distributionally robust learning of latent representations from high-dimensional multi-source data. A primary challenge in generalizing PCA to the multi-source regime is the nonconvexity of the fixed-rank constraint, which renders the minimax optimization nonconvex. To address this challenge, we employ the Fantope relaxation, reformulating the problem as a convex minimax optimization with the objective defined as the maximum loss across sources. To solve the relaxed formulation, we devise an optimistic-gradient Mirror Prox algorithm with explicit closed-form updates. Theoretically, we establish the global convergence of the Mirror Prox algorithm and characterize its convergence rate from the optimization perspective. Furthermore, we offer practical criteria to assess how closely the solution approximates the original nonconvex formulation. Through extensive numerical experiments, we demonstrate StablePCA's high accuracy and efficiency in extracting robust low-dimensional representations across various finite-sample scenarios.
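To make the abstract's setup concrete, below is a minimal Python sketch of the kind of convex minimax problem it describes: a min over the Fantope (the convex hull of rank-k projection matrices) of the max over source weights on the simplex, solved with an extragradient-style Mirror Prox. This is an illustration under stated assumptions, not the paper's implementation: the per-source loss is taken to be the negative explained variance -⟨Σ_l, H⟩, and the names project_fantope, stable_pca, the step size eta, and the toy data are all hypothetical. The paper's exact loss, optimistic-gradient variant, and step-size schedule may differ.

```python
import numpy as np

def project_fantope(M, k, tol=1e-10):
    """Euclidean projection of symmetric M onto the Fantope
    {H : 0 <= H <= I, trace(H) = k}, via bisection on the eigenvalues."""
    evals, evecs = np.linalg.eigh(M)
    # Find the shift theta so that sum(clip(evals - theta, 0, 1)) == k.
    lo, hi = evals.min() - 1.0, evals.max()
    while hi - lo > tol:
        theta = 0.5 * (lo + hi)
        if np.clip(evals - theta, 0.0, 1.0).sum() > k:
            lo = theta
        else:
            hi = theta
    lam = np.clip(evals - 0.5 * (lo + hi), 0.0, 1.0)
    return (evecs * lam) @ evecs.T  # V diag(lam) V^T

def stable_pca(covs, k, eta=0.1, n_iter=500):
    """Extragradient Mirror Prox sketch for the convex-concave problem
        min_{H in Fantope} max_{q in simplex}  sum_l q_l * (-<Sigma_l, H>),
    a group-DRO surrogate that maximizes worst-case explained variance."""
    L, p = len(covs), covs[0].shape[0]
    H = (k / p) * np.eye(p)          # feasible starting point
    q = np.full(L, 1.0 / L)
    H_avg = np.zeros_like(H)

    grad_H = lambda q: -sum(qi * S for qi, S in zip(q, covs))
    grad_q = lambda H: np.array([-np.trace(S @ H) for S in covs])

    for _ in range(n_iter):
        # Extrapolation step: gradients at the current point.
        H_half = project_fantope(H - eta * grad_H(q), k)
        q_half = q * np.exp(eta * grad_q(H))    # entropic mirror ascent on q
        q_half /= q_half.sum()
        # Update step: gradients taken at the extrapolated point.
        H = project_fantope(H - eta * grad_H(q_half), k)
        q = q * np.exp(eta * grad_q(H_half))
        q /= q.sum()
        H_avg += H
    return H_avg / n_iter, q        # averaged iterate and source weights

# Toy usage: three sources with 5 features each.
rng = np.random.default_rng(0)
covs = [np.cov(rng.standard_normal((200, 5)), rowvar=False) for _ in range(3)]
H_bar, q = stable_pca(covs, k=2)
```

With this linear loss the objective is convex in H and concave in q, so Mirror Prox applies with closed-form updates on both blocks: a Euclidean projection onto the Fantope for H (eigenvalue water-filling) and an entropic mirror step (multiplicative weights) for q, matching the abstract's emphasis on explicit updates. The top-k eigenvectors of the averaged iterate H_bar give the shared representation, and a near-rank-k H_bar indicates the relaxation is tight, in the spirit of the paper's practical criteria.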

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)