Text Rationalization for Robust Causal Effect Estimation
By: Lijinghua Zhang, Hengrui Cai
Recent advances in natural language processing have enabled the increasing use of text data in causal inference, particularly for adjusting for confounding factors in treatment effect estimation. Although high-dimensional text can encode rich contextual information, it also poses unique challenges for causal identification and estimation. In particular, the positivity assumption, which requires sufficient treatment overlap across confounder values, is often violated at the observational level when massive text is represented in high-dimensional feature spaces. Redundant or spurious textual features inflate dimensionality, producing extreme propensity scores, unstable weights, and inflated variance in effect estimates. We address these challenges with Confounding-Aware Token Rationalization (CATR), a framework that selects a sparse, necessary subset of tokens using a residual-independence diagnostic designed to preserve enough confounding information for unconfoundedness to hold. By discarding irrelevant text while retaining key signals, CATR mitigates observational-level positivity violations and stabilizes downstream causal effect estimators. Experiments on synthetic data and a real-world study using the MIMIC-III database demonstrate that CATR yields more accurate, stable, and interpretable causal effect estimates than existing baselines.
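To make the positivity issue concrete, the following is a minimal sketch (hypothetical illustration, not the authors' CATR code) of the inverse-propensity-weighted (IPW) estimator of the average treatment effect, and of why extreme propensity scores, as produced by redundant high-dimensional text features, yield unstable weights. The function name `ipw_ate` and the toy numbers are assumptions for illustration only.

```python
def ipw_ate(y, t, e):
    """Horvitz-Thompson IPW estimate of the average treatment effect.

    y: list of outcomes; t: list of binary treatment indicators;
    e: list of propensity scores P(T=1 | X) for each unit.
    """
    n = len(y)
    # Treated units are reweighted by 1/e, control units by 1/(1-e).
    treated = sum(ti * yi / ei for yi, ti, ei in zip(y, t, e)) / n
    control = sum((1 - ti) * yi / (1 - ei) for yi, ti, ei in zip(y, t, e)) / n
    return treated - control

# With good overlap (propensities near 0.5), weights stay near 2.
moderate_weight = 1 / 0.5      # 2.0
# A near-deterministic propensity of 0.001 gives a single unit weight 1000,
# letting one observation dominate the estimate and inflating its variance.
extreme_weight = 1 / 0.001     # 1000.0

# Trivial two-unit example with perfect overlap (e = 0.5 for both units):
ate = ipw_ate([1, 0], [1, 0], [0.5, 0.5])
print(ate)  # 1.0
```

By pruning textual features to a sparse confounding-sufficient subset, a method like CATR keeps fitted propensities away from 0 and 1, so these weights, and the resulting effect estimates, remain stable.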