Score: 1

Benchmarking Mutual Information-based Loss Functions in Federated Learning

Published: April 16, 2025 | arXiv ID: 2504.11877v1

By: Sarang S, Harsh D. Chothani, Qilei Li and more

Potential Business Impact:

Makes AI fairer across participants while keeping their data private and local.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Federated Learning (FL) has attracted considerable interest due to growing privacy concerns and regulations like the General Data Protection Regulation (GDPR), which stresses the importance of privacy-preserving and fair machine learning approaches. In FL, model training takes place on decentralized data, allowing clients to upload locally trained models and receive a globally aggregated model without exposing sensitive information. However, fairness-related challenges, such as biases, uneven performance among clients, and the "free rider" issue, complicate its adoption. In this paper, we examine the use of Mutual Information (MI)-based loss functions to address these concerns. MI has proven to be a powerful tool for measuring dependencies between variables and optimizing deep learning models. By leveraging MI to extract essential features and minimize biases, we aim to improve both the fairness and effectiveness of FL systems. Through extensive benchmarking, we assess the impact of MI-based losses in reducing disparities among clients while enhancing the overall performance of FL.
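
The abstract describes MI-based auxiliary losses used inside the usual FL loop of local client training followed by global aggregation. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: it estimates MI between learned features and labels with a MINE-style (Donsker-Varadhan) lower bound, adds it to each client's cross-entropy objective, and aggregates client models with standard FedAvg. The `MINECritic` network, the `mi_weight` coefficient, and the optimizer setup are assumptions made for illustration.

```python
import copy
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class MINECritic(nn.Module):
    """Statistics network T(z, y) for a Donsker-Varadhan MI lower bound (assumed design)."""

    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))


def mi_lower_bound(critic, z, y_onehot):
    """MINE estimate: I(Z;Y) >= E[T(z, y)] - log E[exp(T(z, y_shuffled))]."""
    joint = critic(z, y_onehot).mean()
    # Shuffling the labels approximates sampling from the product of marginals.
    marg = critic(z, y_onehot[torch.randperm(y_onehot.size(0))])
    log_mean_exp = torch.logsumexp(marg, dim=0) - math.log(marg.size(0))
    return joint - log_mean_exp.squeeze()


def local_train_step(model, critic, x, y, num_classes, optimizer, mi_weight=0.1):
    """One client update: cross-entropy minus a weighted MI(features; labels) term.

    Assumes `model` is an nn.Sequential whose last module is the classifier head,
    and `optimizer` covers both model and critic parameters.
    """
    optimizer.zero_grad()
    feats = model[:-1](x)      # penultimate-layer features
    logits = model[-1](feats)
    y_onehot = F.one_hot(y, num_classes).float()
    # Maximizing MI between features and labels encourages label-relevant
    # representations; the weighting here is purely illustrative.
    loss = F.cross_entropy(logits, y) - mi_weight * mi_lower_bound(critic, feats, y_onehot)
    loss.backward()
    optimizer.step()
    return loss.item()


def fedavg(global_model, client_models, client_sizes):
    """Standard FedAvg: average client parameters weighted by local dataset size."""
    total = float(sum(client_sizes))
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        stacked = torch.stack([
            m.state_dict()[key].float() * (n / total)
            for m, n in zip(client_models, client_sizes)
        ])
        new_state[key] = stacked.sum(dim=0).to(new_state[key].dtype)
    global_model.load_state_dict(new_state)
    return global_model
```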

Country of Origin
🇮🇳 🇬🇧 India, United Kingdom

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)