Correct and Weight: A Simple Yet Effective Loss for Implicit Feedback Recommendation
By: Minglei Yin, Chuanbo Hu, Bin Liu, and more
Potential Business Impact:
Shows you better movie picks by fixing bad guesses.
Learning from implicit feedback has become the standard paradigm for modern recommender systems. However, this setting is fraught with the persistent challenge of false negatives, where unobserved user-item interactions are not necessarily indicative of negative preference. To address this issue, this paper introduces a novel and principled loss function, named Corrected and Weighted (CW) loss, that systematically corrects for the impact of false negatives within the training objective. Our approach integrates two key techniques. First, inspired by Positive-Unlabeled learning, we debias the negative sampling process by re-calibrating the assumed negative distribution. By theoretically approximating the true negative distribution (p^-) using the observable general data distribution (p) and the positive interaction distribution (p^+), our method provides a more accurate estimate of the likelihood that a sampled unlabeled item is truly negative. Second, we introduce a dynamic re-weighting mechanism that modulates the importance of each negative instance based on the model's current prediction. This scheme encourages the model to enforce a larger ranking margin between positive items and confidently predicted (i.e., easy) negative items, while simultaneously down-weighting the penalty on uncertain negatives, which have a higher probability of being false negatives. A key advantage of our approach is its elegance and efficiency: it requires no complex modifications to the data sampling process and adds no significant computational overhead, making it readily applicable to a wide array of existing recommendation models. Extensive experiments conducted on four large-scale, sparse benchmark datasets demonstrate the superiority of our proposed loss. The results show that our method consistently and significantly outperforms a suite of state-of-the-art loss functions across multiple ranking-oriented metrics.
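The abstract describes approximating the true negative distribution p^- from p and p^+ but does not state the formula. A plausible reading, assuming the standard Positive-Unlabeled mixture model with a positive-class prior π (an assumption; the abstract does not specify one), is:

```latex
% PU mixture assumption: the observable distribution mixes positives and
% true negatives, with \pi the (assumed) positive-class prior.
p(x) = \pi \, p^{+}(x) + (1 - \pi) \, p^{-}(x)
% Rearranging recovers the unobservable true-negative distribution:
p^{-}(x) = \frac{p(x) - \pi \, p^{+}(x)}{1 - \pi}
```

Under this reading, the likelihood that a sampled unlabeled item is truly negative is estimated entirely from observable quantities (p, p^+) plus the prior π.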
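The loss itself is not given in closed form here, so the following PyTorch sketch only illustrates how the two stated ingredients could combine in practice: a non-negative PU-style correction of the negative risk, and confidence-based down-weighting of likely false negatives. The function name cw_style_loss, the hyperparameters pos_prior and gamma, and the choice of logistic (softplus) losses and non-negative clamp are all hypothetical choices, not the paper's definitions.

```python
import torch
import torch.nn.functional as F


def cw_style_loss(pos_scores: torch.Tensor,
                  neg_scores: torch.Tensor,
                  pos_prior: float = 0.05,
                  gamma: float = 2.0) -> torch.Tensor:
    """Sketch of a corrected-and-weighted implicit-feedback loss.

    pos_scores: model scores for observed (positive) user-item pairs.
    neg_scores: model scores for sampled unlabeled pairs.
    pos_prior:  assumed positive-class prior pi (hypothetical value).
    gamma:      hypothetical exponent controlling how strongly uncertain
                negatives are down-weighted.
    """
    # Positive term: standard logistic loss on observed interactions.
    pos_loss = F.softplus(-pos_scores).mean()

    # Confidence re-weighting: unlabeled items the model already scores
    # highly are more likely to be false negatives, so they get small
    # weights; low-scoring (confidently negative) items get large ones.
    with torch.no_grad():
        p_hat = torch.sigmoid(neg_scores)            # predicted relevance
        weights = (1.0 - p_hat).pow(gamma)
        weights = weights / weights.mean().clamp(min=1e-8)  # keep scale

    weighted_unlabeled_risk = (weights * F.softplus(neg_scores)).mean()

    # PU-style correction: under p = pi*p^+ + (1-pi)*p^-, the risk on true
    # negatives is estimated as the unlabeled risk minus the risk positives
    # would contribute if scored as negatives; the clamp keeps the estimate
    # non-negative, as in non-negative PU risk estimators.
    pos_as_neg_risk = F.softplus(pos_scores).mean()
    corrected_neg_risk = torch.clamp(
        (weighted_unlabeled_risk - pos_prior * pos_as_neg_risk)
        / (1.0 - pos_prior),
        min=0.0,
    )

    return pos_loss + corrected_neg_risk
```

In use, pos_scores and neg_scores would come from any scoring backbone (matrix factorization, a two-tower network, etc.), consistent with the abstract's claim that the loss is readily applicable to existing models without changes to the sampling pipeline.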
Similar Papers
Evolved Sample Weights for Bias Mitigation: Effectiveness Depends on Optimization Objectives
Machine Learning (CS)
Fixes unfair computer guesses by changing how data is used.
Improving Semi-Supervised Contrastive Learning via Entropy-Weighted Confidence Integration of Anchor-Positive Pairs
Machine Learning (CS)
Teaches computers to learn better with less information.
Cost-Sensitive Unbiased Risk Estimation for Multi-Class Positive-Unlabeled Learning
Machine Learning (CS)
Helps computers learn from good and unknown examples.