Incentivize Contribution and Learn Parameters Too: Federated Learning with Strategic Data Owners

Published: May 17, 2025 | arXiv ID: 2505.12010v2

By: Drashthi Doshi, Aditya Vema Reddy Kesari, Swaprava Nath, and others

Potential Business Impact:

Compensates data owners for contributing their data and compute to collaborative model training, making participation individually rational.

Business Areas:
Crowdsourcing, Collaboration

Classical federated learning (FL) assumes that clients have a limited amount of noisy data with which they voluntarily participate and contribute towards learning a global, more accurate model in a principled manner. The learning happens in a distributed fashion without sharing the data with the center. However, these methods do not consider an agent's incentive to participate and contribute, given that data collection and running a distributed algorithm are costly for the clients. The rationality of contribution has recently been questioned in the literature, and some existing results address this problem. This paper addresses the question of simultaneous parameter learning and incentivizing contribution, which distinguishes it from the extant literature. Our first mechanism incentivizes each client to contribute to the FL process at a Nash equilibrium and simultaneously learns the model parameters. However, this equilibrium outcome can be far from the optimum, in which clients contribute their full data and the algorithm learns the optimal parameters. We propose a second mechanism with monetary transfers that is budget balanced and elicits full data contribution along with optimal parameter learning. Large-scale experiments with real (federated) datasets (CIFAR-10, FeMNIST, and Twitter) show that these algorithms converge quickly in practice, yield good welfare guarantees, and deliver better model performance for all agents.
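To make the setting concrete, here is a minimal sketch of weighted federated averaging in which each client chooses what fraction of its data to contribute, the strategic variable the abstract refers to. This is an illustrative toy on a least-squares objective, not the paper's mechanism; the function names, the contribution fractions, and the synthetic data are all assumptions for the example.

```python
import numpy as np

def local_update(theta, X, y, lr=0.1, steps=20):
    # One client's local gradient steps on a least-squares objective
    # (an assumed stand-in for each client's private loss).
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

def federated_round(theta, clients, contributions):
    # Weighted FedAvg: each client trains only on the fraction of its
    # data it chooses to contribute (its strategy); the center averages
    # the resulting models weighted by contributed sample counts.
    updates, weights = [], []
    for (X, y), c in zip(clients, contributions):
        n = max(1, int(c * len(y)))  # contributed samples
        updates.append(local_update(theta.copy(), X[:n], y[:n]))
        weights.append(n)
    return np.average(updates, axis=0, weights=np.array(weights, float))

# Synthetic federated data: three clients, shared ground-truth model.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_theta + 0.1 * rng.normal(size=100)
    clients.append((X, y))

# Run FL with heterogeneous (strategically chosen) contribution levels.
theta = np.zeros(2)
for _ in range(30):
    theta = federated_round(theta, clients, contributions=[1.0, 0.5, 0.8])
```

Under this kind of aggregation, a client that withholds data both shrinks its weight in the average and adds noise to the global model, which is the tension the paper's incentive mechanisms are designed to resolve.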

Country of Origin
🇮🇳 India

Page Count
19 pages

Category
Computer Science:
Computer Science and Game Theory (cs.GT)