Hidden Convexity of Fair PCA and Fast Solver via Eigenvalue Optimization
By: Junhui Shen, Aaron J. Davis, Ding Lu, and more
Potential Business Impact:
Makes machine learning fairer and faster.
Principal Component Analysis (PCA) is a foundational technique in machine learning for dimensionality reduction of high-dimensional datasets. However, PCA can produce biased outcomes that disadvantage certain subgroups of the underlying dataset. To address this bias, Samadi et al. (2018) introduced a Fair PCA (FPCA) model that equalizes the reconstruction loss between subgroups. Their semidefinite relaxation (SDR) based approach is computationally expensive even for suboptimal solutions, and the alternative FPCA variants developed to improve efficiency often shift the focus away from equalizing the reconstruction loss. In this paper, we identify a hidden convexity in the FPCA model and introduce an algorithm that solves the resulting convex problem via eigenvalue optimization. Our approach achieves the desired fairness in reconstruction loss without sacrificing performance. As demonstrated on real-world datasets, the proposed FPCA algorithm runs $8\times$ faster than the SDR-based algorithm and is at most 85% slower than standard PCA.
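To make the fairness criterion concrete, here is a minimal sketch (an illustration on synthetic data, not the paper's solver) that measures the per-subgroup reconstruction losses produced by standard PCA; the FPCA model of Samadi et al. (2018) seeks a projection for which such losses are equalized. The group sizes, dimensions, and helper names below are illustrative assumptions.

import numpy as np

def group_reconstruction_loss(X, U):
    # Average squared reconstruction error of the rows of X under the projector U U^T.
    X_hat = X @ U @ U.T
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))

rng = np.random.default_rng(0)
X_a = rng.normal(size=(200, 20))          # subgroup A (synthetic)
X_b = 2.0 * rng.normal(size=(100, 20))    # subgroup B (synthetic, different scale)
X = np.vstack([X_a, X_b])
mu = X.mean(axis=0)

# Standard PCA: top-d right singular vectors of the centered pooled data.
d = 5
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
U = Vt[:d].T                              # 20 x d orthonormal basis

loss_a = group_reconstruction_loss(X_a - mu, U)
loss_b = group_reconstruction_loss(X_b - mu, U)
print(f"loss A: {loss_a:.3f}  loss B: {loss_b:.3f}  gap: {abs(loss_a - loss_b):.3f}")

On such data the two subgroups typically incur very different average losses. FPCA chooses the projection so as to close this gap; the paper's contribution is showing that this can be achieved through an equivalent convex eigenvalue optimization rather than an expensive SDR.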
Similar Papers
Accelerating Spectral Clustering under Fairness Constraints
Machine Learning (CS)
Makes data clustering fairer and faster.
StablePCA: Learning Shared Representations across Multiple Sources via Minimax Optimization
Machine Learning (CS)
Finds shared patterns across multiple data sources.
Localized Functional Principal Component Analysis Based on Covariance Structure
Methodology
Finds patterns that appear only in parts of the data.