Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents
By: Paul Mangold, Eloïse Berthier, Eric Moulines
We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings.
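The setting described above, N agents performing local SARSA updates with linear function approximation on heterogeneous MDPs, with periodic parameter averaging, can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the random MDPs, one-hot feature map, epsilon-greedy policy, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

# Hedged sketch of FedSARSA with linear function approximation:
# each agent runs local SARSA(0) updates on its own (heterogeneous)
# transitions and rewards, then the server averages the parameters.
# Environment details below are illustrative assumptions.

rng = np.random.default_rng(0)

n_agents, n_states, n_actions = 4, 5, 2
d = n_states * n_actions          # one-hot (state, action) features
gamma, alpha, eps = 0.9, 0.1, 0.1
local_steps, rounds = 10, 50      # local updates per communication round

def phi(s, a):
    """One-hot feature vector for a state-action pair."""
    x = np.zeros(d)
    x[s * n_actions + a] = 1.0
    return x

# Heterogeneity: each agent has its own random transition kernel and rewards.
P = rng.dirichlet(np.ones(n_states), size=(n_agents, n_states, n_actions))
R = rng.uniform(0, 1, size=(n_agents, n_states, n_actions))

def eps_greedy(theta, s):
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(np.argmax([phi(s, a) @ theta for a in range(n_actions)]))

theta = np.zeros(d)                       # shared parameter after averaging
states = rng.integers(n_states, size=n_agents)

for _ in range(rounds):
    local = []
    for i in range(n_agents):
        th, s = theta.copy(), int(states[i])
        a = eps_greedy(th, s)
        for _ in range(local_steps):
            s_next = int(rng.choice(n_states, p=P[i, s, a]))
            a_next = eps_greedy(th, s_next)
            # SARSA temporal-difference update with linear approximation
            td = R[i, s, a] + gamma * phi(s_next, a_next) @ th - phi(s, a) @ th
            th += alpha * td * phi(s, a)
            s, a = s_next, a_next
        local.append(th)
        states[i] = s
    theta = np.mean(local, axis=0)        # communication: average parameters
```

The averaging step is where the linear speed-up in the number of agents arises in the paper's analysis; heterogeneity enters through the per-agent `P[i]` and `R[i]`.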