Federated Latent Factor Model for Bias-Aware Recommendation with Privacy-Preserving

Published: April 21, 2025 | arXiv ID: 2504.15090v1

By: Junxiang Gao, Yixin Ran, Jia Chen

Potential Business Impact:

Delivers personalized recommendations while keeping users' raw interaction data on their own devices, reducing the risk of data breaches.

Business Areas:
Software

A recommender system (RS) aims to provide users with personalized item recommendations, enhancing their overall experience. Traditional RSs collect and process all user data on a central server. However, this centralized approach raises significant privacy concerns, as it increases the risk of data breaches and privacy leakage, which are becoming increasingly unacceptable to privacy-sensitive users. To address these privacy challenges, federated learning has been integrated into RSs so that raw user data never leaves each user's device. In centralized RSs, rating bias is effectively addressed by jointly analyzing all users' raw interaction data; in federated RSs this becomes a significant challenge, as raw data is no longer accessible under privacy-preserving constraints. To overcome this problem, we propose a Federated Bias-Aware Latent Factor (FBALF) model. In FBALF, training bias is explicitly incorporated into every local model's loss function, allowing rating bias to be effectively eliminated without compromising data privacy. Extensive experiments on three real-world datasets demonstrate that FBALF achieves significantly higher recommendation accuracy than other state-of-the-art federated RSs.
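To make the mechanism concrete, the sketch below is a minimal illustration (not the authors' implementation) of how a bias-aware latent factor update can be split between private parameters that never leave the device (user bias b_u, user factors p_u) and shared item parameters (item biases b_i, item factors Q) whose updates are aggregated on a server. The class and function names, hyperparameters, and the FedAvg-style averaging are assumptions for illustration; the paper's exact loss and protocol may differ.

```python
import numpy as np

# Hypothetical sketch of a bias-aware latent factor model trained federatedly.
# Prediction: r_hat = mu + b_u + b_i + p_u . q_i, where mu is a global mean,
# b_u/b_i are user/item biases, and p_u/q_i are latent factor vectors.
# Ratings and user-side parameters stay on the device; only updates to the
# shared item parameters are sent to the server.

class Client:
    def __init__(self, ratings, k=16, lr=0.01, reg=0.02, mu=0.0):
        self.ratings = ratings                   # local dict: item_id -> rating
        self.mu = mu                             # global mean (server-provided)
        self.b_u = 0.0                           # private user bias
        self.p_u = 0.01 * np.random.randn(k)     # private user factors
        self.lr, self.reg = lr, reg

    def local_epoch(self, Q, b_i):
        """One SGD pass over local ratings; returns item-parameter updates."""
        dQ, db_i = np.zeros_like(Q), np.zeros_like(b_i)
        for i, r in self.ratings.items():
            pred = self.mu + self.b_u + b_i[i] + self.p_u @ Q[i]
            e = r - pred                         # residual after removing biases
            p_old = self.p_u.copy()
            # Private updates, applied in place on the device.
            self.b_u += self.lr * (e - self.reg * self.b_u)
            self.p_u += self.lr * (e * Q[i] - self.reg * self.p_u)
            # Shared updates, deferred and sent to the server.
            db_i[i] += self.lr * (e - self.reg * b_i[i])
            dQ[i] += self.lr * (e * p_old - self.reg * Q[i])
        return dQ, db_i

def server_round(clients, Q, b_i):
    """Average the clients' item-parameter updates (FedAvg-style)."""
    dQ_sum, db_sum = np.zeros_like(Q), np.zeros_like(b_i)
    for c in clients:
        dQ, db = c.local_epoch(Q, b_i)
        dQ_sum += dQ
        db_sum += db
    n = len(clients)
    return Q + dQ_sum / n, b_i + db_sum / n

if __name__ == "__main__":
    # Tiny synthetic demo: 20 clients, 50 items, 5 federated rounds.
    rng = np.random.default_rng(0)
    n_items, k = 50, 8
    clients = [Client({int(j): float(rng.integers(1, 6))
                       for j in rng.choice(n_items, 10, replace=False)},
                      k=k, mu=3.0) for _ in range(20)]
    Q, b_i = 0.01 * rng.standard_normal((n_items, k)), np.zeros(n_items)
    for _ in range(5):
        Q, b_i = server_round(clients, Q, b_i)
```

The point mirrored from the abstract is that the bias terms appear directly in each client's local loss, so rating bias is corrected during training even though the server never sees raw ratings.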

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Machine Learning (CS)