A non-smooth regularization framework for learning over multitask graphs
By: Yara Zgheib, Luca Calatroni, Marc Antonini, and more
Potential Business Impact:
Helps computers learn tasks better together.
In this work, we consider learning over multitask graphs, where each agent aims to estimate its own parameter vector. Although agents pursue distinct objectives, collaboration among them can be beneficial when relationships between the tasks exist. Among the various approaches to promoting such relationships and, consequently, enhancing collaboration between agents, one notable method is regularization. While previous multitask learning studies have focused on smooth regularization to enforce graph smoothness, this work explores non-smooth regularization techniques that promote sparsity, making them particularly effective at encouraging piecewise-constant transitions on the graph. We begin by formulating a global regularized optimization problem, which minimizes the aggregate sum of individual costs, regularized by a general non-smooth term designed to promote piecewise-constant relationships between the tasks of neighboring agents. Building on the forward-backward splitting strategy, we propose a decentralized learning approach that solves the regularized optimization problem efficiently. Then, under convexity assumptions on the cost functions and co-regularization, we establish that the proposed approach converges in the mean-square-error sense within $O(\mu)$ of the optimal solution of the globally regularized cost. For broader applicability and improved computational efficiency, we also derive closed-form expressions of the proximal operators for commonly used non-smooth (and possibly non-convex) regularizers, such as the weighted sum of the $\ell_0$-norm, the $\ell_1$-norm, and elastic-net regularization. Finally, we illustrate both the theoretical findings and the effectiveness of the approach through simulations.
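To make the splitting ingredients concrete, below is a minimal Python sketch of the standard closed-form proximal operators for the $\ell_0$, $\ell_1$, and elastic-net penalties, plugged into a plain single-agent forward-backward loop on a toy sparse least-squares problem. These are textbook formulas shown for illustration only; the function names and the toy problem are hypothetical, and the sketch does not reproduce the paper's decentralized multitask algorithm or its graph-based regularizers.

```python
import numpy as np

# Standard closed-form proximal operators (elementwise); textbook results,
# not taken from the paper itself.

def prox_l1(x, t):
    """Soft-thresholding: prox of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_l0(x, t):
    """Hard-thresholding: prox of t * ||x||_0 (non-convex)."""
    return np.where(np.abs(x) > np.sqrt(2.0 * t), x, 0.0)

def prox_elastic_net(x, t, lam1, lam2):
    """Prox of t * (lam1 * ||x||_1 + (lam2 / 2) * ||x||_2^2)."""
    return prox_l1(x, t * lam1) / (1.0 + t * lam2)

def forward_backward(grad_f, prox_g, w0, mu, n_iter=500):
    """Forward-backward splitting: a gradient (forward) step on the smooth
    part f, followed by a proximal (backward) step on the non-smooth part g."""
    w = w0.copy()
    for _ in range(n_iter):
        w = prox_g(w - mu * grad_f(w), mu)
    return w

# Toy usage: sparse least squares, min_w 0.5*||A w - b||^2 + lam*||w||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[:3] = [1.0, -2.0, 0.5]
b = A @ w_true + 0.01 * rng.standard_normal(50)
lam = 0.1
mu = 1.0 / np.linalg.norm(A, 2) ** 2  # step size below 1/L, L = ||A||_2^2
w_hat = forward_backward(lambda w: A.T @ (A @ w - b),
                         lambda x, t: prox_l1(x, t * lam),
                         np.zeros(20), mu)
```

With $\mu \le 1/\|A\|_2^2$, this loop is the classical iterative soft-thresholding scheme; swapping in `prox_l0` or `prox_elastic_net` changes only the backward step, which mirrors how the choice of regularizer enters a forward-backward approach solely through its proximal mapping.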
Similar Papers
A Graphical Global Optimization Framework for Parameter Estimation of Statistical Models with Nonconvex Regularization Functions
Optimization and Control
Helps computers solve hard optimization problems faster.
Elliptic Loss Regularization
Machine Learning (CS)
Makes AI predict better with new or missing information.
Scalable Hypergraph Structure Learning with Diverse Smoothness Priors
Machine Learning (CS)
Finds hidden connections in complex networks.