Local Gradient Regulation Stabilizes Federated Learning under Client Heterogeneity
By: Ping Luo, Jiahuan Wang, Ziqing Wen, and more
Potential Business Impact:
Makes AI learn together without sharing private info.
Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data, yet its stability is fundamentally challenged by statistical heterogeneity in realistic deployments. Here, we show that client heterogeneity destabilizes FL primarily by distorting local gradient dynamics during client-side optimization, causing systematic drift that accumulates across communication rounds and impedes global convergence. This observation highlights local gradients as a key regulatory lever for stabilizing heterogeneous FL systems. Building on this insight, we develop a general client-side perspective that regulates local gradient contributions without incurring additional communication overhead. Inspired by swarm intelligence, we instantiate this perspective through Exploratory–Convergent Gradient Re-aggregation (ECGR), which balances well-aligned and misaligned gradient components to preserve informative updates while suppressing destabilizing effects. Theoretical analysis and extensive experiments, including evaluations on the LC25000 medical imaging dataset, demonstrate that regulating local gradient dynamics consistently stabilizes federated learning across state-of-the-art methods under heterogeneous data distributions.
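The abstract does not give ECGR's exact formula, but the idea of balancing well-aligned ("convergent") and misaligned ("exploratory") gradient components can be sketched as follows. This is a minimal illustration, not the authors' implementation: the decomposition against a reference direction (e.g., the last global update) and the weighting coefficients `alpha` and `beta` are assumptions made here for clarity.

```python
import numpy as np

def ecgr_reaggregate(local_grad, ref_dir, alpha=1.0, beta=0.5):
    """Illustrative sketch of exploratory-convergent gradient re-aggregation.

    Splits the client's local gradient into a component aligned with a
    reference direction (here, assumed to be the previous global update)
    and its orthogonal remainder, then reweights the two parts before the
    local optimizer step. alpha and beta are hypothetical coefficients.
    """
    ref_unit = ref_dir / (np.linalg.norm(ref_dir) + 1e-12)
    aligned = np.dot(local_grad, ref_unit) * ref_unit  # "convergent" part
    misaligned = local_grad - aligned                  # "exploratory" part
    # Keep the informative aligned signal; damp the potentially drifting part.
    return alpha * aligned + beta * misaligned

# Example: a gradient partly aligned with the global direction [1, 0].
g = ecgr_reaggregate(np.array([1.0, 1.0]), np.array([1.0, 0.0]))
print(g)  # the misaligned (second) component is scaled down by beta
```

Because this re-aggregation happens entirely on the client before its update is sent, it adds no communication overhead, matching the client-side framing in the abstract.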
Similar Papers
GC-Fed: Gradient Centralized Federated Learning with Partial Client Participation
Machine Learning (CS)
Keeps AI learning fair even with different data.
Fairness Regularization in Federated Learning
Machine Learning (CS)
Makes AI learn fairly from everyone's data.
Data Heterogeneity-Aware Client Selection for Federated Learning in Wireless Networks
Distributed, Parallel, and Cluster Computing
Helps phones train AI without sharing private data.