Federated Latent Factor Model for Bias-Aware Recommendation with Privacy-Preserving
By: Junxiang Gao, Yixin Ran, Jia Chen
Potential Business Impact:
Keeps your personal data on your own device while still giving you good recommendations.
A recommender system (RS) aims to provide users with personalized item recommendations, enhancing their overall experience. Traditional RSs collect and process all user data on a central server. This centralized approach, however, raises significant privacy concerns: it increases the risk of data breaches and privacy leaks, which privacy-sensitive users increasingly find unacceptable. To address these challenges, federated learning has been integrated into RSs so that raw user data never leaves the local device. In centralized RSs, rating bias can be addressed effectively by jointly analyzing all users' raw interaction data, but this becomes a significant challenge in federated RSs, where raw data is no longer accessible due to privacy-preserving constraints. To overcome this problem, we propose a Federated Bias-Aware Latent Factor (FBALF) model. In FBALF, training bias is explicitly incorporated into every local model's loss function, allowing rating bias to be eliminated without compromising data privacy. Extensive experiments on three real-world datasets demonstrate that FBALF achieves significantly higher recommendation accuracy than other state-of-the-art federated RSs.
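To make the idea of a bias-aware local loss concrete, below is a minimal NumPy sketch of how such a model could be trained federatedly: each client keeps its ratings, user latent factors, and user bias term on-device, and ships only item-side gradients to a server that averages them. The class and function names (LocalClient, federated_round), the hyperparameters, and the plain gradient-averaging scheme are illustrative assumptions based on the classic biased matrix-factorization objective (global mean + user bias + item bias + dot product), not FBALF's exact protocol.

```python
# Hypothetical sketch of bias-aware federated matrix factorization.
# Names, hyperparameters, and aggregation scheme are assumptions,
# not the FBALF paper's exact algorithm.
import numpy as np

K = 8      # latent dimensionality (assumed)
LR = 0.01  # learning rate (assumed)
REG = 0.05 # L2 regularization weight (assumed)

class LocalClient:
    """One user's device: ratings, user factors, and user bias stay local."""
    def __init__(self, ratings, mu, rng):
        self.ratings = ratings          # {item_id: rating}, never leaves the device
        self.mu = mu                    # global rating mean (public statistic)
        self.p = rng.normal(0, 0.1, K)  # private user latent factors
        self.b_u = 0.0                  # private user bias term

    def local_update(self, Q, b_i):
        """One SGD pass over local ratings; returns item-side gradients only."""
        grad_Q = np.zeros_like(Q)
        grad_b = np.zeros_like(b_i)
        for i, r in self.ratings.items():
            # bias-aware prediction: mean + user bias + item bias + interaction
            pred = self.mu + self.b_u + b_i[i] + self.p @ Q[i]
            err = r - pred
            # item-side gradients are the only thing shared with the server
            grad_Q[i] += err * self.p - REG * Q[i]
            grad_b[i] += err - REG * b_i[i]
            # private user parameters are updated in place, on-device
            self.p += LR * (err * Q[i] - REG * self.p)
            self.b_u += LR * (err - REG * self.b_u)
        return grad_Q, grad_b

def federated_round(clients, Q, b_i):
    """Server step: average item-parameter gradients from all clients."""
    agg_Q, agg_b = np.zeros_like(Q), np.zeros_like(b_i)
    for c in clients:
        gq, gb = c.local_update(Q, b_i)
        agg_Q += gq
        agg_b += gb
    Q += LR * agg_Q / len(clients)
    b_i += LR * agg_b / len(clients)
    return Q, b_i

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_items = 20
    # toy data: 5 users, each rating 5 random items on a 1-5 scale
    raw = {u: {int(i): float(rng.integers(1, 6))
               for i in rng.choice(n_items, 5, replace=False)}
           for u in range(5)}
    mu = np.mean([r for d in raw.values() for r in d.values()])
    clients = [LocalClient(d, mu, rng) for d in raw.values()]
    Q = rng.normal(0, 0.1, (n_items, K))
    b_i = np.zeros(n_items)
    for _ in range(50):
        Q, b_i = federated_round(clients, Q, b_i)
```

The key design point this sketch illustrates is the split of parameters: user-side terms (the latent vector and user bias) never leave the client, while only shared item-side parameters are updated through server aggregation, which is the standard privacy pattern for federated latent factor models.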
Similar Papers
Beyond Personalization: Federated Recommendation with Calibration via Low-rank Decomposition
Cryptography and Security
Keeps your movie picks private, still suggests good movies.
Privacy-Preserving Federated Learning Framework for Risk-Based Adaptive Authentication
Cryptography and Security
Keeps your online accounts safe without sharing your secrets.