Counterfactual Risk Minimization with IPS-Weighted BPR and Self-Normalized Evaluation in Recommender Systems
By: Rahul Raja, Arpita Vats
Potential Business Impact:
Makes online suggestions more helpful and fair.
Learning and evaluating recommender systems from logged implicit feedback is challenging due to exposure bias. While inverse propensity scoring (IPS) corrects this bias, it often suffers from high variance and instability. In this paper, we present a simple and effective pipeline that pairs IPS-weighted training, via an IPS-weighted Bayesian Personalized Ranking (BPR) objective, with a Propensity Regularizer (PR). We compare the Direct Method (DM), IPS, and Self-Normalized IPS (SNIPS) for offline policy evaluation, and demonstrate how IPS-weighted training improves model robustness under biased exposure. The proposed PR further mitigates the variance amplification caused by extreme propensity weights, leading to more stable estimates. Experiments on synthetic and MovieLens 100K data show that our approach generalizes better under unbiased exposure while reducing evaluation variance compared to naive and standard IPS methods, offering practical guidance for counterfactual learning and evaluation in real-world recommendation settings.
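To make the two ingredients in the abstract concrete, here is a minimal Python/NumPy sketch of an IPS-weighted BPR loss with a propensity regularizer, alongside plain IPS and self-normalized IPS (SNIPS) estimators for offline evaluation. This is not the authors' implementation: the function names, the weight-clipping threshold, and the exact form of the regularizer (a penalty on large inverse-propensity weights) are illustrative assumptions; only the BPR pairwise loss, the IPS estimator, and the SNIPS estimator follow their standard textbook definitions.

import numpy as np

def ips_weighted_bpr_loss(pos_scores, neg_scores, propensities, lam=0.1, clip=10.0):
    # IPS-weighted BPR: each (user, positive, negative) triple is reweighted by the
    # clipped inverse propensity of the logged positive item. The regularizer term
    # (lam * mean squared weight) penalizing extreme weights is an assumed form.
    w = np.minimum(1.0 / propensities, clip)                 # clipped IPS weights
    bpr = -np.log(1.0 / (1.0 + np.exp(-(pos_scores - neg_scores))) + 1e-12)
    reg = lam * np.mean(w ** 2)                              # propensity regularizer (assumption)
    return np.mean(w * bpr) + reg

def ips_estimate(rewards, propensities):
    # Plain IPS estimate of policy value from logged feedback.
    return np.mean(rewards / propensities)

def snips_estimate(rewards, propensities):
    # Self-normalized IPS: divide by the sum of weights instead of n,
    # trading a small bias for substantially lower variance.
    w = 1.0 / propensities
    return np.sum(w * rewards) / np.sum(w)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    propensities = rng.uniform(0.05, 1.0, size=n)            # logged exposure probabilities
    rewards = rng.binomial(1, 0.3, size=n).astype(float)
    pos = rng.normal(1.0, 1.0, size=n)
    neg = rng.normal(0.0, 1.0, size=n)
    print("IPS-weighted BPR loss:", ips_weighted_bpr_loss(pos, neg, propensities))
    print("IPS estimate:", ips_estimate(rewards, propensities))
    print("SNIPS estimate:", snips_estimate(rewards, propensities))

The contrast between ips_estimate and snips_estimate illustrates the variance point in the abstract: when a few logged items have very small propensities, the plain IPS average is dominated by a handful of large weights, while the self-normalized version divides by the realized weight mass and remains more stable.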
Similar Papers
LLMs for estimating positional bias in logged interaction data
Information Retrieval
Makes online lists show better, fairer results.
Variational Bayesian Personalized Ranking
Information Retrieval
Makes online suggestions show you better things.
Document Similarity Enhanced IPS Estimation for Unbiased Learning to Rank
Information Retrieval
Makes search results fairer by fixing user bias.