Accelerated Methods with Complexity Separation Under Data Similarity for Federated Learning Problems
By: Dmitry Bylinkin, Sergey Skorik, Dmitriy Bystrov, and others
Potential Business Impact:
Makes computers learn together without sharing private data.
Heterogeneity in data distributions poses a challenge in many modern federated learning tasks. We formalize it as an optimization problem involving a computationally heavy composite term under data similarity. By employing different sets of assumptions, we present several approaches to developing communication-efficient methods. An optimal algorithm is proposed for the convex case. The constructed theory is validated through a series of experiments across various problems.
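A standard way to formalize this setting (a sketch of the usual composite federated formulation, not necessarily the exact one used in the paper) is a finite-sum problem with a composite term, where data similarity is typically expressed as a Hessian-closeness bound between local and global losses:

```latex
\min_{x \in \mathbb{R}^d} \; F(x) := f(x) + r(x),
\qquad
f(x) = \frac{1}{n} \sum_{i=1}^{n} f_i(x),
```

where \(f_i\) is the loss on client \(i\)'s local data and \(r\) is the composite term. Data similarity is commonly modeled by assuming
\(\|\nabla^2 f_i(x) - \nabla^2 f(x)\| \le \delta\) for all \(x\) and some small \(\delta > 0\); methods exploiting this assumption can trade expensive communication for cheap local computation, which is the source of the communication-efficiency gains the abstract refers to.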
Similar Papers
Adaptive Federated Learning to Optimize the MultiCast flows in Data Centers
Systems and Control
Saves energy by smarter computer center teamwork.
Non-Convex Federated Optimization under Cost-Aware Client Selection
Machine Learning (CS)
Makes AI learn faster with less talking.
Federated Online Learning for Heterogeneous Multisource Streaming Data
Machine Learning (Stat)
Learns from many computers without sharing private data.