Practical Improvements of A/B Testing with Off-Policy Estimation
By: Otmane Sakhi, Alexandre Gilotte, David Rohde
Potential Business Impact:
Makes A/B tests of website changes more efficient, identifying the best option faster.
We address the problem of A/B testing, a widely used protocol for evaluating the potential improvement achieved by a new decision system compared to a baseline. This protocol segments the population into two subgroups, exposes each to one version of the system, and estimates the improvement as the difference between the measured effects. In this work, we demonstrate that the commonly used difference-in-means estimator, while unbiased, can be improved. We introduce a family of unbiased off-policy estimators that achieves lower variance than the standard approach, and within this family we identify the estimator with the lowest variance. The resulting estimator is simple and offers substantial variance reduction when the two tested systems exhibit similarities. Our theoretical analysis and experimental results validate the effectiveness and practicality of the proposed method.
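The abstract's key intuition can be illustrated with a small simulation. Below is a minimal sketch, not the paper's exact estimator: it compares the standard difference-in-means estimator with a generic inverse-propensity-style off-policy estimator that reweights each logged reward by the difference of the two systems' action propensities under their 50/50 mixture. When the two policies are similar, these weights are close to zero for most samples, which is one way the variance reduction described in the abstract can arise. All policy and reward parameters here are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 20_000        # users in one simulated A/B test (assumed)
n_actions = 5     # discrete decisions per user (assumed)

# Two stochastic decision systems (policies) over the actions.
# System B is deliberately a small perturbation of system A,
# mirroring the "similar systems" regime from the abstract.
pi_a = rng.dirichlet(np.ones(n_actions))
pi_b = 0.9 * pi_a + 0.1 * rng.dirichlet(np.ones(n_actions))

# Expected reward per action (unknown to both estimators).
q = rng.uniform(0.0, 1.0, n_actions)
true_delta = q @ (pi_b - pi_a)   # the quantity an A/B test estimates

def simulate():
    """Run one 50/50 A/B test: sample groups, actions, rewards."""
    group_b = rng.random(n) < 0.5
    probs = np.where(group_b[:, None], pi_b, pi_a)
    # Inverse-CDF sampling of one action per user.
    actions = (probs.cumsum(axis=1) > rng.random((n, 1))).argmax(axis=1)
    rewards = rng.binomial(1, q[actions]).astype(float)
    return group_b, actions, rewards

def diff_in_means(group_b, actions, rewards):
    """Standard unbiased estimator: mean(B rewards) - mean(A rewards)."""
    return rewards[group_b].mean() - rewards[~group_b].mean()

def off_policy_diff(group_b, actions, rewards):
    """IPS-style sketch: marginally, actions are logged under the
    50/50 mixture of the two policies, so reweighting rewards by
    (pi_b - pi_a) / mixture gives another unbiased estimate of the
    improvement, with near-zero weights where the policies agree."""
    mix = 0.5 * (pi_a + pi_b)
    w = (pi_b[actions] - pi_a[actions]) / mix[actions]
    return (rewards * w).mean()

if __name__ == "__main__":
    g, a, r = simulate()
    print("true improvement:     ", true_delta)
    print("difference-in-means:  ", diff_in_means(g, a, r))
    print("off-policy estimate:  ", off_policy_diff(g, a, r))
```

Repeating `simulate()` many times and comparing the empirical standard deviations of the two estimators shows the off-policy variant concentrating much more tightly around `true_delta` in this similar-policies setup; the paper's contribution is identifying the minimum-variance member of such a family, which this sketch does not attempt.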
Similar Papers
Beyond Basic A/B testing: Improving Statistical Efficiency for Business Growth
Methodology
Improves website tests for better business results.
A Two-armed Bandit Framework for A/B Testing
Machine Learning (Stat)
Tests new ideas faster and more reliably.
Uncertainty Quantification and Causal Considerations for Off-Policy Decision Making
Machine Learning (Stat)
Helps computers learn from old data safely.