Smoothing ADMM for Non-convex and Non-smooth Hierarchical Federated Learning
By: Reza Mirzaeifard, Stefan Werner
Potential Business Impact:
Trains AI models faster and more accurately across devices that hold different data.
This paper presents a hierarchical federated learning (FL) framework that extends the alternating direction method of multipliers (ADMM) with smoothing techniques, tailored for non-convex and non-smooth objectives. Unlike traditional hierarchical FL methods, our approach supports asynchronous updates and multiple updates per iteration, enhancing adaptability to heterogeneous data and system settings. Additionally, we introduce a flexible mechanism to leverage diverse regularization functions at each layer, allowing customization to the specific prior information within each cluster and accommodating (possibly) non-smooth penalty objectives. Depending on the learning goal, the framework supports both consensus and personalization: the total variation norm can be used to enforce consensus across layers, while non-convex penalties such as minimax concave penalty (MCP) or smoothly clipped absolute deviation (SCAD) enable personalized learning. Experimental results demonstrate the superior convergence rates and accuracy of our method compared to conventional approaches, underscoring its robustness and versatility for a wide range of FL scenarios.
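For a concrete sense of the penalties named in the abstract, the sketch below (Python with NumPy) implements the standard textbook definitions of MCP and SCAD, plus a Moreau-envelope (Huber-type) smoothing of the absolute value as an example of the kind of smooth surrogate a smoothing ADMM scheme can operate on. The function names, default parameters (gamma = 3.0, a = 3.7, mu = 0.1), and this particular smoothing are illustrative assumptions, not the paper's specific algorithm or update rules.

```python
# Minimal sketch (not the authors' implementation): standard MCP and SCAD
# penalties and a Moreau-envelope smoothing of |t|. Names, defaults, and the
# smoothing choice are illustrative assumptions for this listing.
import numpy as np

def mcp_penalty(t, lam, gamma=3.0):
    """Minimax concave penalty (Zhang, 2010), applied elementwise."""
    x = np.abs(t)
    return np.where(x <= gamma * lam,
                    lam * x - x**2 / (2.0 * gamma),
                    0.5 * gamma * lam**2)

def scad_penalty(t, lam, a=3.7):
    """Smoothly clipped absolute deviation penalty (Fan & Li, 2001)."""
    x = np.abs(t)
    mid = (2.0 * a * lam * x - x**2 - lam**2) / (2.0 * (a - 1.0))
    return np.where(x <= lam, lam * x,
                    np.where(x <= a * lam, mid, 0.5 * lam**2 * (a + 1.0)))

def smoothed_abs(t, mu=0.1):
    """Moreau envelope of |t|: a differentiable surrogate with parameter mu."""
    x = np.abs(t)
    return np.where(x <= mu, x**2 / (2.0 * mu), x - 0.5 * mu)

if __name__ == "__main__":
    w = np.linspace(-3, 3, 7)
    print(mcp_penalty(w, lam=1.0))   # flattens out for large |w|
    print(scad_penalty(w, lam=1.0))  # bounded, unlike the l1 norm
    print(smoothed_abs(w))           # smooth everywhere, unlike |w|
```

Both penalties grow like the l1 norm near the origin but level off for large coefficients, so per-client parameters can deviate substantially from a shared model without unbounded cost; this is what makes them natural choices for personalization, in contrast to a total variation term that pulls neighboring layers toward consensus.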
Similar Papers
Federated Smoothing ADMM for Localization
Machine Learning (CS)
Helps networked devices estimate positions more accurately.
Efficient Distributed Learning over Decentralized Networks with Convoluted Support Vector Machine
Machine Learning (Stat)
Teaches computers to learn from data faster.
Distributionally Robust Federated Learning: An ADMM Algorithm
Machine Learning (CS)
Helps AI learn reliably even when data sources differ.