The Bias-Variance Tradeoff in Long-Term Experimentation
By: Daniel Ting, Kenneth Hung
Potential Business Impact:
Improves long-term decisions by balancing precision and bias.
As we exhaust methods that reduce variance without introducing bias, reducing variance in experiments often requires accepting some bias, through methods such as winsorization or surrogate metrics. While this bias-variance tradeoff can be optimized for individual experiments, bias may accumulate over time, raising concerns for long-term optimization. We analyze whether bias is ever acceptable when it can accumulate, and show that a bias-variance tradeoff persists in long-term settings. Improving signal-to-noise remains beneficial even if it introduces bias. This implies we should shift from thinking there is a single "correct", unbiased metric to thinking about how to make the best estimates and decisions when better precision can be achieved at the expense of bias. Furthermore, our model adds nuance to previous findings suggesting that less stringent launch criteria lead to improved gains. We show that while this is beneficial when the system is far from the optimum, more stringent launch criteria are preferable as the system matures.
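The winsorization tradeoff described above can be illustrated with a small simulation (an illustrative sketch, not code from the paper): winsorizing a heavy-tailed metric at its sample quantiles before averaging sharply reduces the variance of the estimate while introducing some bias relative to the true mean. The metric distribution, quantile level, and sample sizes below are arbitrary choices for illustration.

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def winsorize(xs, p=0.01):
    """Clip observations below the p-th and above the (1-p)-th sample quantile."""
    s = sorted(xs)
    lo = s[int(p * len(s))]
    hi = s[int((1 - p) * len(s)) - 1]
    return [min(max(x, lo), hi) for x in xs]

def draw_sample(n):
    # Heavy-tailed "metric" with true mean 1.0: mostly well-behaved
    # observations, plus ~1% extreme outliers.
    return [random.gauss(1.0, 1.0) if random.random() < 0.99
            else random.gauss(1.0, 50.0) for _ in range(n)]

# Compare the plain mean with a winsorized mean across many replications.
plain, wins = [], []
for _ in range(2000):
    s = draw_sample(200)
    plain.append(mean(s))
    wins.append(mean(winsorize(s)))

print(f"plain mean:      bias {mean(plain) - 1.0:+.4f}, var {var(plain):.4f}")
print(f"winsorized mean: bias {mean(wins) - 1.0:+.4f}, var {var(wins):.4f}")
```

The winsorized estimator is no longer unbiased, but its much lower variance can make per-experiment decisions more reliable, which is exactly the tradeoff the abstract argues persists even in long-term settings.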
Similar Papers
When three experiments are better than two: Avoiding intractable correlated aleatoric uncertainty by leveraging a novel bias-variance tradeoff
Machine Learning (CS)
Helps computers learn faster with noisy data.
The bias of IID resampled backtests for rolling-window mean-variance portfolios
Portfolio Management
Fixes money predictions that use old data.
The Bias-Variance Tradeoff in Data-Driven Optimization: A Local Misspecification Perspective
Machine Learning (Stat)
Improves computer learning by balancing guessing and certainty.