Diffusion-based Decentralized Federated Multi-Task Representation Learning
By: Donghwa Kang, Shana Moothedath
Representation learning is a widely adopted framework for learning in data-scarce environments: a feature extractor, or representation, is learned from several different yet related tasks. Despite extensive research on representation learning, decentralized approaches remain relatively underexplored. This work develops a decentralized projected gradient descent-based algorithm for multi-task representation learning. We focus on the problem of multi-task linear regression, in which multiple linear regression models share a common, low-dimensional linear representation. We present an alternating projected gradient descent and minimization algorithm for recovering a low-rank feature matrix in a diffusion-based, decentralized, and federated fashion. We obtain constructive, provable guarantees that give a lower bound on the required sample complexity and an upper bound on the iteration complexity of our proposed algorithm. We analyze the time and communication complexity of our algorithm and show that it is fast and communication-efficient. We perform numerical simulations to validate the performance of our algorithm and compare it with benchmark algorithms.
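To make the problem setting concrete, the sketch below implements a centralized (single-node) version of alternating projected gradient descent and minimization for multi-task linear regression with a shared low-rank representation. It is an illustrative toy, not the paper's diffusion-based decentralized algorithm: all dimensions, step sizes, and variable names (`B` for the shared d×r representation, `W` for the per-task heads) are assumptions chosen for the example. Each iteration solves a per-task least-squares problem for the heads (the minimization step), then takes a gradient step on the shared matrix and projects it back onto orthonormal matrices via a QR factorization (the projected gradient step).

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, K, n = 20, 2, 10, 50  # ambient dim, representation rank, tasks, samples per task

# Illustrative ground truth: orthonormal shared representation B*, per-task heads w_k*
B_star, _ = np.linalg.qr(rng.standard_normal((d, r)))
W_star = rng.standard_normal((r, K))
X = [rng.standard_normal((n, d)) for _ in range(K)]          # per-task design matrices
y = [X[k] @ B_star @ W_star[:, k] for k in range(K)]         # noiseless responses

# Alternating minimization + projected gradient descent (centralized sketch)
B, _ = np.linalg.qr(rng.standard_normal((d, r)))             # random orthonormal init
eta = 0.2 / (K * n)                                          # assumed step size
for _ in range(300):
    # Minimization step: exact least squares for each task head w_k given B
    W = np.column_stack(
        [np.linalg.lstsq(X[k] @ B, y[k], rcond=None)[0] for k in range(K)]
    )
    # Gradient of (1/2) * sum_k ||X_k B w_k - y_k||^2 with respect to B
    grad = sum(
        (X[k].T @ (X[k] @ B @ W[:, k] - y[k]))[:, None] @ W[:, k][None, :]
        for k in range(K)
    )
    # Projected gradient step: QR maps the update back to an orthonormal basis
    B, _ = np.linalg.qr(B - eta * grad)

# Subspace distance between the estimate and the true representation
err = np.linalg.norm((np.eye(d) - B @ B.T) @ B_star, 2)
print(f"subspace error: {err:.2e}")
```

In the decentralized setting described in the abstract, the gradient with respect to `B` would instead be computed locally at each node and combined with neighbors' iterates through diffusion-style averaging over the communication graph, rather than summed at a central server.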