When Is Causal Inference Possible? A Statistical Test for Unmeasured Confounding
By: Muye Liu, Jun Xie
Potential Business Impact:
Tests whether real-world data can be used to draw cause-and-effect conclusions.
This paper clarifies a fundamental difference between causal inference and traditional statistical inference by formalizing a mathematical distinction between their respective parameters. We connect two major approaches to causal inference, the potential outcomes framework and causal structure graphs, which are typically studied separately. While the unconfoundedness assumption in the potential outcomes framework cannot be assessed from an observational dataset alone, causal structure graphs help explain when causal effects are identifiable through graphical models. We propose a statistical test to assess the unconfoundedness assumption, equivalent to the absence of unmeasured confounding, by comparing two datasets: a randomized controlled trial and an observational study. The test controls the Type I error probability, and we analyze its power under linear models. Our approach provides a practical method to evaluate when real-world data are suitable for causal inference.
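As a rough illustration of the idea of comparing a randomized controlled trial with an observational study, the sketch below contrasts two OLS treatment-effect estimates with a Wald-type statistic under linear outcome models. This is not the paper's proposed test statistic: the function name `confounding_test`, the variable names, and the synthetic-data demo are all hypothetical, and the paper should be consulted for the actual procedure and its power analysis.

```python
import numpy as np
from scipy import stats

def confounding_test(y_rct, t_rct, x_rct, y_obs, t_obs, x_obs, alpha=0.05):
    """Compare treatment-effect estimates from an RCT and an observational
    study under linear outcome models. A large discrepancy is evidence of
    unmeasured confounding in the observational data.
    Illustrative sketch only; names and details are not from the paper."""

    def ols_effect(y, t, x):
        # Regress the outcome on treatment and measured covariates;
        # return the treatment coefficient and its standard error.
        design = np.column_stack([np.ones(len(y)), t, x])
        coef, _, _, _ = np.linalg.lstsq(design, y, rcond=None)
        resid = y - design @ coef
        dof = len(y) - design.shape[1]
        sigma2 = resid @ resid / dof
        cov = sigma2 * np.linalg.inv(design.T @ design)
        return coef[1], np.sqrt(cov[1, 1])

    tau_rct, se_rct = ols_effect(y_rct, t_rct, x_rct)
    tau_obs, se_obs = ols_effect(y_obs, t_obs, x_obs)

    # Wald-type statistic for H0: the two effects coincide
    # (consistent with no unmeasured confounding in the observational study).
    z = (tau_obs - tau_rct) / np.sqrt(se_obs**2 + se_rct**2)
    p_value = 2 * stats.norm.sf(abs(z))
    return {"tau_rct": tau_rct, "tau_obs": tau_obs,
            "z": z, "p_value": p_value,
            "reject_unconfoundedness": p_value < alpha}


if __name__ == "__main__":
    # Hypothetical synthetic data: u is an unmeasured confounder that
    # affects both treatment and outcome in the observational study only.
    rng = np.random.default_rng(0)
    n = 500
    x_obs = rng.normal(size=n)
    u = rng.normal(size=n)
    t_obs = (x_obs + u + rng.normal(size=n) > 0).astype(float)
    y_obs = 2.0 * t_obs + x_obs + u + rng.normal(size=n)

    x_rct = rng.normal(size=n)
    t_rct = rng.binomial(1, 0.5, size=n).astype(float)
    y_rct = 2.0 * t_rct + x_rct + rng.normal(size=n)

    # The observational estimate is biased by u, so the test should reject.
    print(confounding_test(y_rct, t_rct, x_rct, y_obs, t_obs, x_obs))
```

In this sketch the null hypothesis is that the two linear-model estimates target the same effect; rejecting it flags the observational dataset as unsuitable for causal inference under unconfoundedness, which mirrors the comparison-of-datasets idea described in the abstract.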
Similar Papers
Multiple Regression Analysis of Unmeasured Confounding (Methodology): Finds hidden causes affecting results.
A Causal Inference Framework for Data Rich Environments (Econometrics): Helps understand what *would have* happened.
A Sensitivity Analysis Framework for Causal Inference Under Interference (Methodology): Finds hidden problems affecting results.