A Design-based Solution for Causal Inference with Text: Can a Language Model Be Too Large?
By: Graham Tierney, Srikar Katta, Christopher Bail, and others
Potential Business Impact:
Shows how expressing humility in political messages changes how persuasive audiences find them.
Many social science questions ask how linguistic properties causally affect an audience's attitudes and behaviors. Because text properties are often interlinked (e.g., angry reviews use profane language), we must control for possible latent confounding to isolate causal effects. Recent literature proposes adapting large language models (LLMs) to learn latent representations of text that predict both the treatment and the outcome. However, because the treatment is a component of the text, these deep learning methods risk learning representations that encode the treatment itself, inducing overlap bias. Rather than depending on post-hoc adjustments, we introduce a new experimental design that handles latent confounding, avoids the overlap issue, and yields unbiased estimates of treatment effects. We apply this design in an experiment evaluating the persuasiveness of expressing humility in political communication. Methodologically, we demonstrate that, on the real text and outcomes from our experiment, LLM-based methods perform worse than even simple bag-of-words models. Substantively, we isolate the causal effect of expressing humility on the perceived persuasiveness of political statements, offering new insights on communication effects for social media platforms, policy makers, and social scientists.
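To make the overlap problem the abstract describes concrete, here is a minimal simulation sketch (not the authors' code or data; all variable names, the simple IPW estimator, and the data-generating process are illustrative assumptions). It shows how a text representation that also encodes the treatment pushes estimated propensity scores toward 0 and 1, destabilizing adjustment, while a randomized design recovers the effect with a simple difference in means.

```python
# Illustrative sketch only: a toy simulation of overlap bias from a
# treatment-leaking text representation. Not the paper's method or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

u = rng.normal(size=n)                                 # latent confounder in the text
treat = rng.binomial(1, 1 / (1 + np.exp(-u)))          # e.g., humble phrasing
y = 1.0 * treat + 2.0 * u + rng.normal(size=n)         # true effect = 1.0

def ipw(features, treat, y):
    """Inverse-propensity-weighted effect estimate from a logistic propensity model."""
    ps = LogisticRegression().fit(features, treat).predict_proba(features)[:, 1]
    ps = np.clip(ps, 1e-3, 1 - 1e-3)                   # clipping hides, not fixes, overlap failure
    return np.mean(treat * y / ps - (1 - treat) * y / (1 - ps))

# Representation capturing only the confounder: adjustment recovers ~1.0.
rep_confounder_only = u.reshape(-1, 1)
# "Too large" representation that also encodes the treatment: propensities
# pile up near 0/1, overlap fails, and the estimate becomes unstable/biased.
rep_treatment_leaking = np.column_stack([u, treat + 0.01 * rng.normal(size=n)])

print("confounder-only representation:", round(ipw(rep_confounder_only, treat, y), 2))
print("treatment-leaking representation:", round(ipw(rep_treatment_leaking, treat, y), 2))

# Under a randomized design, the propensity is known by construction (0.5),
# so a simple difference in means is unbiased for the effect.
treat_rand = rng.binomial(1, 0.5, size=n)
y_rand = 1.0 * treat_rand + 2.0 * u + rng.normal(size=n)
print("randomized design, difference in means:",
      round(y_rand[treat_rand == 1].mean() - y_rand[treat_rand == 0].mean(), 2))
```

The design choice the sketch gestures at: when assignment is randomized by construction, no learned representation of the text is needed for identification, which is why the paper's design sidesteps the overlap issue entirely.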
Similar Papers
Can Large Language Models Help Experimental Design for Causal Discovery?
Artificial Intelligence
Lets computers find science answers faster.
A Hybrid Theory and Data-driven Approach to Persuasion Detection with Large Language Models
Computation and Language
Helps computers tell if online messages change minds.
BiasCause: Evaluate Socially Biased Causal Reasoning of Large Language Models
Computation and Language
Finds why computers say unfair things.