Targeted Learning Estimation of Sampling Variance for Improved Inference
By: Yunwen Ji, Mark van der Laan, Alan Hubbard
Potential Business Impact:
Makes statistical conclusions from studies more trustworthy, especially with small samples.
For robust statistical inference, it is crucial to obtain a good estimator of the variance of the proposed estimator of the statistical estimand. A commonly used variance estimator for an asymptotically linear estimator is the sample variance of the estimated influence function (IF). This estimator has been shown to be anti-conservative in small samples or in the presence of near-positivity violations, leading to elevated Type-I error rates and poor coverage. In this paper, building on earlier work on targeted variance estimators, we propose a one-step targeted variance estimator for the causal risk ratio (CRR) in settings with a treatment, an outcome, and baseline covariates. While our primary focus is the variance of the log(CRR) estimator, the methodology can be extended to other causal effect parameters. Specifically, we target the variance of the IF of the log(CRR) estimator, which requires deriving the efficient influence function for that variance as the basis for constructing the estimator. Several methods are available for developing efficient estimators of asymptotically linear parameters; here we concentrate on the one-step targeted maximum likelihood estimator, a substitution estimator that updates the distribution along a one-dimensional universal least favorable parametric submodel. We conduct simulations with varying effect sizes, sample sizes, and degrees of positivity to compare the proposed estimator with existing methods in terms of coverage and Type-I error. Simulation results demonstrate that, especially with small samples and near-positivity violations, the proposed variance estimator offers improved performance, achieving coverage closer to the nominal level of 0.95 and a lower Type-I error rate.
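As context for the problem the paper addresses, the sketch below computes the conventional IF-based variance estimate for log(CRR): the sample variance of the estimated efficient influence function divided by n. This is the baseline estimator the abstract describes as potentially anti-conservative, not the proposed one-step targeted estimator; the function `ic_variance_log_crr` and the pre-fitted nuisance estimators `Qbar` (outcome regression) and `g` (propensity score) are illustrative assumptions, not code from the paper.

```python
import numpy as np

def ic_variance_log_crr(Y, A, W, Qbar, g):
    """Conventional (non-targeted) IF-based variance estimate for
    psi = log(E[Y(1)] / E[Y(0)]), the log causal risk ratio.

    Y, A : length-n arrays of outcome and binary treatment.
    W    : (n, p) array of baseline covariates.
    Qbar : fitted outcome regression, Qbar(a, W) ~ E[Y | A=a, W].
    g    : fitted propensity score, g(W) ~ P(A=1 | W).
    """
    n = len(Y)
    Q1, Q0 = Qbar(1, W), Qbar(0, W)      # predicted outcomes under A=1 and A=0
    gW = g(W)                            # estimated propensity scores
    mu1, mu0 = Q1.mean(), Q0.mean()      # plug-in estimates of E[Y(1)], E[Y(0)]

    # Efficient influence functions of E[Y(1)] and E[Y(0)] for O = (W, A, Y),
    # combined by the delta method to give the EIF of log(mu1) - log(mu0).
    D1 = A / gW * (Y - Q1) + Q1 - mu1
    D0 = (1 - A) / (1 - gW) * (Y - Q0) + Q0 - mu0
    D = D1 / mu1 - D0 / mu0

    psi_hat = np.log(mu1 / mu0)
    var_psi = D.var(ddof=1) / n          # sample variance of estimated IF / n
    return psi_hat, var_psi
```

Under near-positivity violations the weights 1/gW and 1/(1 - gW) become extreme, which is one reason this plug-in variance can understate the true sampling variance; the paper's contribution is a targeted estimator of the variance of the IF itself, built from its efficient influence function.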
Similar Papers
Asymptotically Efficient Data-adaptive Penalized Shrinkage Estimation with Application to Causal Inference
Methodology
Makes computer guesses more accurate with less data.
The covariance of causal effect estimators for binary v-structures
Statistics Theory
Combines two ways to find causes for better results.
On Efficient Estimation of Distributional Treatment Effects under Covariate-Adaptive Randomization
Econometrics
Improves study results by balancing groups better.