Convergence Guarantees for Federated SARSA with Local Training and Heterogeneous Agents

Published: December 19, 2025 | arXiv ID: 2512.17688v1

By: Paul Mangold, Eloïse Berthier, Eric Moulines

We present a novel theoretical analysis of Federated SARSA (FedSARSA) with linear function approximation and local training. We establish convergence guarantees for FedSARSA in the presence of heterogeneity, both in local transitions and rewards, providing the first sample and communication complexity bounds in this setting. At the core of our analysis is a new, exact multi-step error expansion for single-agent SARSA, which is of independent interest. Our analysis precisely quantifies the impact of heterogeneity, demonstrating the convergence of FedSARSA with multiple local updates. Crucially, we show that FedSARSA achieves linear speed-up with respect to the number of agents, up to higher-order terms due to Markovian sampling. Numerical experiments support our theoretical findings.
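To make the setting concrete, here is a minimal sketch of the FedSARSA scheme the abstract describes: each agent runs SARSA(0) with linear function approximation on its own environment for several local steps, then a server averages the parameters. The toy random-walk MDPs, one-hot feature map, reward shifts modeling heterogeneity, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 5, 2
DIM = N_STATES * N_ACTIONS  # one-hot (state, action) features

def phi(s, a):
    """One-hot feature vector for a state-action pair."""
    v = np.zeros(DIM)
    v[s * N_ACTIONS + a] = 1.0
    return v

def eps_greedy(theta, s, eps=0.1):
    """Epsilon-greedy action w.r.t. the linear value estimate."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    q = [theta @ phi(s, a) for a in range(N_ACTIONS)]
    return int(np.argmax(q))

def step(s, a, reward_shift):
    """Toy random-walk transition; reward_shift models agent heterogeneity."""
    s_next = (s + (1 if a == 1 else -1)) % N_STATES
    r = float(s_next == N_STATES - 1) + reward_shift
    return s_next, r

def fed_sarsa(n_agents=4, n_rounds=50, local_steps=10,
              alpha=0.1, gamma=0.9):
    theta = np.zeros(DIM)                     # global parameter
    shifts = rng.normal(0.0, 0.05, n_agents)  # heterogeneous rewards
    for _ in range(n_rounds):
        local = []
        for i in range(n_agents):
            th = theta.copy()                 # start from the global model
            s = int(rng.integers(N_STATES))
            a = eps_greedy(th, s)
            for _ in range(local_steps):      # local SARSA(0) updates
                s2, r = step(s, a, shifts[i])
                a2 = eps_greedy(th, s2)
                td = r + gamma * (th @ phi(s2, a2)) - th @ phi(s, a)
                th = th + alpha * td * phi(s, a)
                s, a = s2, a2
            local.append(th)
        theta = np.mean(local, axis=0)        # server averaging step
    return theta

theta = fed_sarsa()
print(theta.shape, bool(np.isfinite(theta).all()))
```

The averaging frequency (`local_steps`) is the knob the paper's communication-complexity bounds concern: more local steps mean less communication, at the cost of drift induced by agent heterogeneity.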

Category: Computer Science / Machine Learning (CS)