Score: 2

Local Gradient Regulation Stabilizes Federated Learning under Client Heterogeneity

Published: January 7, 2026 | arXiv ID: 2601.03584v1

By: Ping Luo, Jiahuan Wang, Ziqing Wen, and more

Potential Business Impact:

Lets organizations train a shared AI model together without exposing their private data, even when each participant's data looks very different.

Business Areas:
A/B Testing, Data and Analytics

Federated learning (FL) enables collaborative model training across distributed clients without sharing raw data, yet its stability is fundamentally challenged by statistical heterogeneity in realistic deployments. Here, we show that client heterogeneity destabilizes FL primarily by distorting local gradient dynamics during client-side optimization, causing systematic drift that accumulates across communication rounds and impedes global convergence. This observation highlights local gradients as a key regulatory lever for stabilizing heterogeneous FL systems. Building on this insight, we develop a general client-side perspective that regulates local gradient contributions without incurring additional communication overhead. Inspired by swarm intelligence, we instantiate this perspective through Exploratory-Convergent Gradient Re-aggregation (ECGR), which balances well-aligned and misaligned gradient components to preserve informative updates while suppressing destabilizing effects. Theoretical analysis and extensive experiments, including evaluations on the LC25000 medical imaging dataset, demonstrate that regulating local gradient dynamics consistently stabilizes federated learning across state-of-the-art methods under heterogeneous data distributions.
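The abstract does not spell out ECGR's update rule, but the core idea it describes, splitting a client's local gradient into well-aligned and misaligned components and re-weighting them, can be sketched as follows. This is a minimal illustration, assuming the reference direction is something like the last global update; the function name `ecgr_reaggregate`, the weights `alpha` and `beta`, and the projection-based split are all hypothetical choices, not the paper's exact formulation.

```python
import numpy as np

def ecgr_reaggregate(local_grad, ref_grad, alpha=1.0, beta=0.5):
    """Re-weight a client's local gradient against a reference direction.

    Splits `local_grad` into a component parallel to `ref_grad`
    (treated as well-aligned) and the orthogonal residual (treated
    as potentially misaligned), then recombines them with weights
    `alpha` and `beta`. The split and the weights are illustrative
    assumptions, not the paper's published rule.
    """
    ref_norm_sq = np.dot(ref_grad, ref_grad)
    if ref_norm_sq == 0.0:
        return local_grad  # no reference direction available
    # Orthogonal projection onto the reference direction.
    parallel = (np.dot(local_grad, ref_grad) / ref_norm_sq) * ref_grad
    orthogonal = local_grad - parallel
    # Keep the aligned part intact; dampen the residual that drives drift.
    return alpha * parallel + beta * orthogonal

# Example: a heterogeneous client whose gradient is only partly aligned
# with the (assumed) global descent direction.
ref = np.array([1.0, 0.0])
g = np.array([0.6, 0.8])
print(ecgr_reaggregate(g, ref))  # [0.6, 0.4]: misaligned part suppressed
```

Because the re-weighting happens entirely on the client before the update is sent, a scheme of this shape adds no extra communication, which is consistent with the abstract's claim of no additional communication overhead.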

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
26 pages

Category
Computer Science:
Machine Learning (CS)