Smoothing ADMM for Non-convex and Non-smooth Hierarchical Federated Learning

Published: March 11, 2025 | arXiv ID: 2503.08869v1

By: Reza Mirzaeifard, Stefan Werner

Potential Business Impact:

Enables AI models to be trained more efficiently and accurately across devices that hold diverse, distributed data.

Business Areas:
Virtual Assistant Software

This paper presents a hierarchical federated learning (FL) framework that extends the alternating direction method of multipliers (ADMM) with smoothing techniques, tailored for non-convex and non-smooth objectives. Unlike traditional hierarchical FL methods, our approach supports asynchronous updates and multiple updates per iteration, enhancing adaptability to heterogeneous data and system settings. Additionally, we introduce a flexible mechanism to leverage diverse regularization functions at each layer, allowing customization to the specific prior information within each cluster and accommodating (possibly) non-smooth penalty objectives. Depending on the learning goal, the framework supports both consensus and personalization: the total variation norm can be used to enforce consensus across layers, while non-convex penalties such as minimax concave penalty (MCP) or smoothly clipped absolute deviation (SCAD) enable personalized learning. Experimental results demonstrate the superior convergence rates and accuracy of our method compared to conventional approaches, underscoring its robustness and versatility for a wide range of FL scenarios.
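To make the penalties named in the abstract concrete, below is a minimal NumPy sketch of the MCP and SCAD regularizers and their standard (unit-step) proximal maps, which are the kind of building blocks a smoothing ADMM update would call. This is only an illustration of the penalty functions themselves, not the paper's hierarchical ADMM algorithm; the parameter defaults (`lam`, `gamma`, `a`) are illustrative assumptions.

```python
import numpy as np

def mcp_penalty(x, lam=1.0, gamma=3.0):
    """Minimax concave penalty (MCP), applied elementwise."""
    ax = np.abs(x)
    inner = lam * ax - ax**2 / (2.0 * gamma)   # region |x| <= gamma*lam
    outer = 0.5 * gamma * lam**2               # constant for |x| > gamma*lam
    return np.where(ax <= gamma * lam, inner, outer)

def scad_penalty(x, lam=1.0, a=3.7):
    """Smoothly clipped absolute deviation (SCAD) penalty, elementwise."""
    ax = np.abs(x)
    r1 = lam * ax                                          # |x| <= lam
    r2 = (2 * a * lam * ax - ax**2 - lam**2) / (2 * (a - 1))  # lam < |x| <= a*lam
    r3 = 0.5 * lam**2 * (a + 1)                            # |x| > a*lam
    return np.where(ax <= lam, r1, np.where(ax <= a * lam, r2, r3))

def mcp_prox(v, lam=1.0, gamma=3.0):
    """Proximal map of MCP with unit step ("firm thresholding"), gamma > 1."""
    av = np.abs(v)
    shrunk = np.sign(v) * (av - lam) / (1.0 - 1.0 / gamma)
    return np.where(av <= lam, 0.0, np.where(av <= gamma * lam, shrunk, v))

def scad_prox(v, lam=1.0, a=3.7):
    """Proximal map of SCAD with unit step, a > 2."""
    av = np.abs(v)
    soft = np.sign(v) * np.maximum(av - lam, 0.0)          # |v| <= 2*lam
    mid = ((a - 1) * v - np.sign(v) * a * lam) / (a - 2)   # 2*lam < |v| <= a*lam
    return np.where(av <= 2 * lam, soft, np.where(av <= a * lam, mid, v))

if __name__ == "__main__":
    v = np.linspace(-5, 5, 11)
    print("MCP penalty :", np.round(mcp_penalty(v), 3))
    print("SCAD prox   :", np.round(scad_prox(v), 3))
```

Unlike the L1 norm, both penalties stop growing for large arguments, so their proximal maps leave large entries unchanged; this is what lets clients retain personalized components instead of being shrunk toward a single global consensus.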

Country of Origin
🇳🇴 Norway

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)