Beyond ATE: Multi-Criteria Design for A/B Testing
By: Jiachun Li, Kaining Shi, David Simchi-Levi
Potential Business Impact:
Shows how to run tests that lose less money while they run, still give accurate answers, and keep customer data private.
A/B testing is a widely adopted methodology for estimating conditional average treatment effects (CATEs) in both clinical trials and online platforms. While most existing research has focused primarily on maximizing estimation accuracy, practical applications must also account for additional objectives, most notably welfare or revenue loss. In many settings, it is critical to administer treatments that improve patient outcomes or to implement plans that generate greater revenue from customers. Within a machine learning framework, such objectives are naturally captured through the notion of cumulative regret. In this paper, we investigate the fundamental trade-off between social welfare loss and statistical accuracy in (adaptive) experiments with heterogeneous treatment effects. We establish matching upper and lower bounds for the resulting multi-objective optimization problem and employ the concept of Pareto optimality to characterize the necessary and sufficient conditions for optimal experimental designs. Beyond estimating CATEs, practitioners often aim to deploy treatment policies that maximize welfare across the entire population. We demonstrate that our Pareto-optimal adaptive design achieves optimal post-experiment welfare, irrespective of the in-experiment trade-off between accuracy and welfare. Furthermore, since clinical and commercial data are often highly sensitive, it is essential to incorporate robust privacy guarantees into any treatment-allocation mechanism. To this end, we develop differentially private algorithms that continue to achieve our established lower bounds, showing that privacy can be attained at negligible cost.
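To make the trade-off concrete, below is a minimal Python sketch, not the paper's algorithm: a two-armed adaptive experiment in which an assumed exploration parameter gamma interpolates between an accuracy-first fully randomized design (gamma = 1) and a welfare-first design that mostly plays the empirically better arm (gamma near 0), with an optional Laplace mechanism adding differential privacy to the final treatment-effect estimate. The function run_experiment, the decaying exploration schedule, and the sensitivity bound are illustrative assumptions, not the authors' design.

import numpy as np

rng = np.random.default_rng(0)

def run_experiment(reward_fn, T=10_000, gamma=0.5, dp_epsilon=None):
    """gamma in [0, 1] tunes the welfare/accuracy trade-off:
    gamma = 1 keeps a fully randomized (accuracy-first) design;
    gamma near 0 exploits the empirically better arm (welfare-first)."""
    counts = np.zeros(2)
    sums = np.zeros(2)
    for t in range(1, T + 1):
        explore_prob = max(gamma, t ** -0.5)        # decaying exploration
        if rng.random() < explore_prob or counts.min() == 0:
            arm = rng.integers(2)                   # uniform randomization
        else:
            arm = int(np.argmax(sums / counts))     # play the leading arm
        r = reward_fn(arm)
        counts[arm] += 1
        sums[arm] += r
    ate_hat = sums[1] / counts[1] - sums[0] / counts[0]
    if dp_epsilon is not None:
        # Laplace mechanism: for rewards bounded in [0, 1], changing one
        # sample moves the difference of means by at most 1 / min(counts).
        sensitivity = 1.0 / counts.min()
        ate_hat += rng.laplace(scale=sensitivity / dp_epsilon)
    return ate_hat

# Example: arm 1 is better by 0.1; compare accuracy-first vs welfare-first.
reward = lambda arm: rng.binomial(1, 0.5 + 0.1 * arm)
print(run_experiment(reward, gamma=1.0))                   # pure A/B test
print(run_experiment(reward, gamma=0.1, dp_epsilon=1.0))   # adaptive + DP

Raising gamma shrinks the variance of the estimate at the cost of assigning more users to the worse arm, which is the welfare/accuracy frontier the abstract describes; the Laplace noise scale shrinks like 1/(epsilon * n), illustrating why privacy can come at negligible cost in large experiments.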
Similar Papers
Learning Across Experiments and Time: Tackling Heterogeneity in A/B Testing
Methodology
Makes online tests give more accurate results sooner.
Bayesian Semiparametric Causal Inference: Targeted Doubly Robust Estimation of Treatment Effects
Methodology
Finds true effects from messy data.
Conditional cross-fitting for unbiased machine-learning-assisted covariate adjustment in randomized experiments
Methodology
Makes study results more accurate with less data.