Mechanism Design for Federated Learning with Non-Monotonic Network Effects
By: Xiang Li, Bing Luo, Jianwei Huang, and more
Potential Business Impact:
Helps shared AI models improve by rewarding the devices that train them.
Mechanism design is pivotal to federated learning (FL) for maximizing social welfare by coordinating self-interested clients. Existing mechanisms, however, often overlook the network effects of client participation and the diverse model performance requirements (i.e., generalization error) across applications, leading to suboptimal incentives and social welfare, or even rendering them inapplicable in real deployments. To address this gap, we explore incentive mechanism design for FL with network effects and application-specific requirements on model performance. We develop a theoretical model to quantify the impact of network effects on heterogeneous client participation, revealing the non-monotonic nature of such effects. Based on these insights, we propose a Model Trading and Sharing (MoTS) framework, which enables clients to obtain FL models through either participation or purchase. To further address clients' strategic behaviors, we design a Social Welfare maximization with Application-aware and Network effects (SWAN) mechanism, which leverages payments from model customers for incentivization. Experimental results on a hardware prototype demonstrate that our SWAN mechanism outperforms existing FL mechanisms, improving social welfare by up to $352.42\%$ and reducing extra incentive costs by $93.07\%$.
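The abstract's key observation is that network effects of client participation can be non-monotonic: adding clients first raises social welfare, but eventually the marginal accuracy gain no longer covers participation costs. The sketch below illustrates this with entirely hypothetical functional forms (a `1/sqrt(n)` generalization-error curve and a per-client cost that grows with system size); it is not the paper's actual model, only a toy showing how an interior welfare peak can arise.

```python
# Toy illustration of a non-monotonic network effect in FL participation.
# All functional forms and constants below are assumptions for the sketch,
# not taken from the paper.

def generalization_error(n, a=1.0, eps=0.05):
    """Assumed error model: error shrinks with more participants,
    with diminishing returns (a / sqrt(n)) plus an irreducible floor eps."""
    return a / n**0.5 + eps

def per_client_cost(n, c0=0.05, c1=0.01):
    """Assumed cost model: each client's cost grows with system size
    (e.g., communication/aggregation overhead), c0 + c1 * n."""
    return c0 + c1 * n

def social_welfare(n, value=1.0):
    """Welfare of n participants: total model value minus total costs."""
    accuracy = 1.0 - generalization_error(n)
    return n * (value * accuracy - per_client_cost(n))

# Welfare first rises with n, then falls once the growing cost outweighs
# the shrinking accuracy gain -- the non-monotonic network effect.
best_n = max(range(1, 101), key=social_welfare)
```

Under these assumptions the welfare-maximizing cohort size sits strictly between 1 and 100 clients, which is why a mechanism that simply recruits as many participants as possible can be suboptimal.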
Similar Papers
A Service-Oriented Adaptive Hierarchical Incentive Mechanism for Federated Learning
Machine Learning (CS)
Pays people to help train smart computer programs.
Hierarchical Federated Learning for Social Network with Mobility
Machine Learning (CS)
Learns from phones without seeing your private stuff.
Incentive-Compatible Federated Learning with Stackelberg Game Modeling
Machine Learning (CS)
Makes AI learn fairly for everyone.