SyncFed: Time-Aware Federated Learning through Explicit Timestamping and Synchronization
By: Baran Can Gül, Stefanos Tziampazis, Nasser Jazdi and more
Potential Business Impact:
Makes AI learn better even with slow internet.
As Federated Learning (FL) expands to larger and more distributed environments, consistency in training is challenged by network-induced delays, unsynchronized clocks, and variability in client updates. Together, these factors can produce misaligned contributions that undermine model reliability and convergence. Existing methods such as staleness-aware aggregation and model versioning address lagging updates heuristically, yet lack mechanisms to quantify staleness, especially in latency-sensitive and cross-regional deployments. In light of these considerations, we introduce SyncFed, a time-aware FL framework that employs explicit synchronization and timestamping to establish a common temporal reference across the system. Staleness is quantified numerically from exchanged timestamps under the Network Time Protocol (NTP), enabling the server to reason about the relative freshness of client updates and apply temporally informed weighting during aggregation. Our empirical evaluation on a geographically distributed testbed shows that, under SyncFed, the global model evolves within a stable temporal context, resulting in improved accuracy and information freshness compared to round-based baselines devoid of temporal semantics.
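To make the idea of temporally informed weighting concrete, here is a minimal sketch in Python/NumPy. It assumes client updates arrive tagged with send timestamps from NTP-synchronized clocks; the function names, the reciprocal decay rule, and the decay constant are illustrative assumptions, not the paper's actual aggregation scheme.

```python
# Minimal sketch (not the authors' implementation): staleness-aware weighted
# averaging of client updates, assuming all timestamps come from clocks
# synchronized via NTP so they share a common temporal reference.
import time
import numpy as np

def staleness_weight(update_time: float, now: float, decay: float = 0.05) -> float:
    """Map an update's age in seconds to a weight in (0, 1]; fresher updates weigh more."""
    staleness = max(0.0, now - update_time)
    return 1.0 / (1.0 + decay * staleness)  # hypothetical decay rule, not from the paper

def aggregate(client_updates: list[tuple[np.ndarray, float]]) -> np.ndarray:
    """Combine client model vectors, each paired with its NTP-based send timestamp."""
    now = time.time()  # server clock, assumed synchronized with clients
    weights = np.array([staleness_weight(ts, now) for _, ts in client_updates])
    weights /= weights.sum()  # normalize so the weights form a convex combination
    stacked = np.stack([params for params, _ in client_updates])
    return (weights[:, None] * stacked).sum(axis=0)

if __name__ == "__main__":
    # Example: one update 2 s old and one 120 s old; the result lies
    # much closer to the fresh client's parameters.
    now = time.time()
    updates = [(np.ones(4), now - 2.0), (np.zeros(4), now - 120.0)]
    print(aggregate(updates))
```

The key design point the sketch illustrates is that staleness becomes a number derived from a shared clock, rather than a round counter, so the server can down-weight lagging clients proportionally instead of applying ad-hoc cutoffs.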
Similar Papers
Efficient Federated Learning with Timely Update Dissemination
Distributed, Parallel, and Cluster Computing
Improves AI learning from many phones faster.
FTTE: Federated Learning on Resource-Constrained Devices
Machine Learning (CS)
Trains AI faster on small devices.
Empirical Analysis of Asynchronous Federated Learning on Heterogeneous Devices: Efficiency, Fairness, and Privacy Trade-offs
Distributed, Parallel, and Cluster Computing
Makes AI learn faster, but some devices lose more privacy.