Alternating Direction Method of Multipliers for Nonlinear Matrix Decompositions
By: Atharva Awari, Nicolas Gillis, Arnaud Vandaele
We present an algorithm based on the alternating direction method of multipliers (ADMM) for solving nonlinear matrix decompositions (NMD). Given an input matrix $X \in \mathbb{R}^{m \times n}$ and a factorization rank $r \ll \min(m, n)$, NMD seeks matrices $W \in \mathbb{R}^{m \times r}$ and $H \in \mathbb{R}^{r \times n}$ such that $X \approx f(WH)$, where $f$ is an element-wise nonlinear function. We evaluate our method on several representative nonlinear models: the rectified linear unit activation $f(x) = \max(0, x)$, suitable for nonnegative sparse data approximation; the component-wise square $f(x) = x^2$, applicable to probabilistic circuit representation; and the MinMax transform $f(x) = \min(b, \max(a, x))$, relevant for recommender systems. The proposed framework flexibly supports diverse loss functions, including least squares, the $\ell_1$ norm, and the Kullback-Leibler divergence, and can be readily extended to other nonlinearities and metrics. We illustrate the applicability, efficiency, and adaptability of the approach on real-world datasets, highlighting its potential for a broad range of applications.
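To make the model $X \approx f(WH)$ concrete, the sketch below illustrates the ReLU case $f(x) = \max(0, x)$ with a simple alternating scheme over a latent matrix $Z$ constrained to satisfy $\max(0, Z) = X$. This is a minimal illustration of the decomposition model, not the ADMM algorithm of the paper; the function name `relu_nmd` and all parameter choices are for exposition only.

```python
import numpy as np

def relu_nmd(X, r, iters=200):
    """Minimal alternating sketch (NOT the paper's ADMM) for X ~ max(0, W @ H).

    Maintains a latent matrix Z that must equal X on the support of X and be
    nonpositive elsewhere, and alternates with a rank-r approximation of Z.
    """
    mask = X > 0          # support of the nonnegative sparse data
    Z = X.copy()
    for _ in range(iters):
        # Low-rank step: best rank-r approximation of Z via truncated SVD.
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        W = U[:, :r] * s[:r]
        H = Vt[:r]
        Theta = W @ H
        # Latent step: Z = X on the support; Z <= 0 (closest to Theta) off it.
        Z = np.where(mask, X, np.minimum(Theta, 0.0))
    return W, H

# Usage: recover a planted ReLU factorization of rank 3.
rng = np.random.default_rng(1)
X = np.maximum(0, rng.standard_normal((20, 3)) @ rng.standard_normal((3, 15)))
W, H = relu_nmd(X, 3)
rel_err = np.linalg.norm(X - np.maximum(0, W @ H)) / np.linalg.norm(X)
```

Each step decreases $\|Z - WH\|_F$, so on exact ReLU-generated data of matching rank the relative error typically becomes small; the paper's ADMM framework additionally handles other nonlinearities ($x^2$, MinMax) and losses ($\ell_1$, KL) within one formulation.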