Incentivizing Inclusive Contributions in Model Sharing Markets
By: Enpei Zhang, Jingyi Chai, Rui Ye, and more
Potential Business Impact:
Lets people train AI with private data safely.
While data plays a crucial role in training contemporary AI models, it is widely acknowledged that valuable public data will be exhausted within a few years, directing the world's attention toward the massive amounts of decentralized private data. However, the privacy-sensitive nature of raw data and the lack of incentive mechanisms prevent these valuable data from being fully exploited. To address these challenges, this paper proposes inclusive and incentivized personalized federated learning (iPFL), which incentivizes data holders with diverse purposes to collaboratively train personalized models without revealing raw data. iPFL constructs a model-sharing market by solving a graph-based training optimization problem and incorporates an incentive mechanism grounded in game-theoretic principles. Theoretical analysis shows that iPFL satisfies two key incentive properties: individual rationality and truthfulness. Empirical studies on eleven AI tasks (e.g., instruction-following tasks for large language models) demonstrate that iPFL consistently achieves the highest economic utility and better or comparable model performance relative to baseline methods. We anticipate that iPFL can serve as a valuable technique for building future AI models on decentralized private data while keeping every participant satisfied.
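To make the two ingredients named in the abstract more concrete (a collaboration graph that governs how clients share models, and a payment rule that rewards contribution), here is a minimal sketch. It assumes a simple row-stochastic collaboration graph and a per-use pricing rule; the function names (`personalized_aggregate`, `payments`), the graph weights, and the payment formula are illustrative assumptions, not the paper's actual optimization or mechanism.

```python
import numpy as np

def personalized_aggregate(models, graph_weights):
    """Each client's personalized model is a graph-weighted average of the
    shared models. graph_weights[i, j] is how much client i relies on
    client j's model; rows are assumed to sum to 1."""
    # models: (n_clients, n_params); graph_weights: (n_clients, n_clients)
    return graph_weights @ models

def payments(graph_weights, prices):
    """Illustrative payment rule (an assumption, not the paper's mechanism):
    client i pays client j in proportion to how much it uses j's model,
    at j's quoted price. Net transfer = income from others - own spending."""
    spend = graph_weights * prices[np.newaxis, :]   # what each i pays each j
    np.fill_diagonal(spend, 0.0)                    # no self-payment
    return spend.sum(axis=0) - spend.sum(axis=1)    # income minus spending

# Toy example: 3 clients, 4-parameter models.
rng = np.random.default_rng(0)
models = rng.normal(size=(3, 4))

# Hypothetical collaboration graph: clients 0 and 1 share heavily,
# client 2 mostly relies on its own model.
graph_weights = np.array([
    [0.6, 0.3, 0.1],
    [0.3, 0.6, 0.1],
    [0.1, 0.1, 0.8],
])
prices = np.array([1.0, 1.0, 2.0])  # per-use price each client asks for its model

personalized = personalized_aggregate(models, graph_weights)
net_transfers = payments(graph_weights, prices)
print("personalized models:\n", personalized)
print("net transfers:", net_transfers)
```

In iPFL itself, the graph weights come out of the training optimization and the payments are designed so that individual rationality and truthfulness hold; the sketch only shows where those two quantities enter a personalized federated learning round.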
Similar Papers
Advancing Personalized Federated Learning: Integrative Approaches with AI for Enhanced Privacy and Customization
Machine Learning (CS)
Makes AI smarter without sharing your private data.
Privacy Preserving Machine Learning Model Personalization through Federated Personalized Learning
Machine Learning (CS)
Keeps your private data safe when AI learns.
A Lightweight and Secure Deep Learning Model for Privacy-Preserving Federated Learning in Intelligent Enterprises
Cryptography and Security
Makes smart devices learn together securely and faster.