Incentive Analysis for Agent Participation in Federated Learning
By: Lihui Yi, Xiaochun Niu, Ermin Wei
Potential Business Impact:
Helps AI learn together without sharing private data.
Federated learning offers a decentralized approach to machine learning, in which multiple agents collaboratively train a model while preserving data privacy. In this paper, we investigate the decision-making and equilibrium behavior in federated learning systems, where agents choose between participating in global training and conducting independent local training. The problem is first modeled as a stage game and then extended to a repeated game to analyze the long-term dynamics of agent participation. For the stage game, we characterize the participation patterns and identify the Nash equilibrium, revealing how data heterogeneity shapes equilibrium behavior: agents with similar data quality participate in FL as a group. We also derive the socially optimal outcome and show that it coincides with the Nash equilibrium under mild assumptions. In the repeated game, we propose a privacy-preserving, computationally efficient myopic strategy. This strategy enables agents to make practical decisions under bounded rationality and converges in finite time to a neighborhood of the stage game's Nash equilibrium. By combining theoretical insights with practical strategy design, this work provides a realistic and effective framework for guiding and analyzing agent behavior in federated learning systems.
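As a rough intuition for the participation dynamics described above, the sketch below simulates a myopic best-response process for a few agents deciding between FL and local training. The payoff functions (global_payoff, local_payoff), the quality values, and the gain/penalty/cost parameters are illustrative assumptions rather than the paper's actual utility model; the point is only to show how a myopic update can reach a fixed point where agents with similar data quality participate as a group.

```python
# Illustrative sketch only: the payoff forms below (a per-co-participant gain
# that decays with the quality gap, minus a fixed participation cost) are
# assumptions for demonstration, not the utility model used in the paper.

def global_payoff(i, others, quality, gain=0.1, penalty=2.0, cost=0.15):
    """Assumed payoff for agent i from joining global training alongside `others`."""
    collab = sum(max(0.0, 1.0 - penalty * abs(quality[i] - quality[j]))
                 for j in others)
    return quality[i] + gain * collab - cost

def local_payoff(i, quality):
    """Assumed payoff from independent local training."""
    return quality[i]

def myopic_round(participants, quality):
    """One round of a myopic strategy: each agent best-responds to the
    previous round's participant set, using only its own payoffs."""
    return {i for i in quality
            if global_payoff(i, participants - {i}, quality)
               >= local_payoff(i, quality)}

# Example: three agents with similar data quality and one outlier.
quality = {0: 0.90, 1: 0.85, 2: 0.80, 3: 0.20}
participants = set(quality)              # start with everyone in FL
for _ in range(20):
    nxt = myopic_round(participants, quality)
    if nxt == participants:              # fixed point reached
        break
    participants = nxt
print("participants at fixed point:", sorted(participants))  # -> [0, 1, 2]
```

Under these assumed payoffs, the outlier agent drops out and the three similar agents remain in FL together, mirroring the grouping behavior the abstract attributes to the stage-game equilibrium.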
Similar Papers
Incentive-Based Federated Learning
Machine Learning (CS)
Makes computers learn together without sharing private info.
Incentivize Contribution and Learn Parameters Too: Federated Learning with Strategic Data Owners
CS and Game Theory
Pays people to help computers learn better.
A study on performance limitations in Federated Learning
Machine Learning (CS)
Keeps your data private while training AI.