A Design-based Solution for Causal Inference with Text: Can a Language Model Be Too Large?

Published: October 9, 2025 | arXiv ID: 2510.08758v1

By: Graham Tierney, Srikar Katta, Christopher Bail, and more

Potential Business Impact:

Shows how expressing humility in political messages affects how persuasive audiences find them.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Many social science questions ask how linguistic properties causally affect an audience's attitudes and behaviors. Because text properties are often interlinked (e.g., angry reviews use profane language), we must control for possible latent confounding to isolate causal effects. Recent literature proposes adapting large language models (LLMs) to learn latent representations of text that successfully predict both treatment and the outcome. However, because the treatment is a component of the text, these deep learning methods risk learning representations that actually encode the treatment itself, inducing overlap bias. Rather than depending on post-hoc adjustments, we introduce a new experimental design that handles latent confounding, avoids the overlap issue, and unbiasedly estimates treatment effects. We apply this design in an experiment evaluating the persuasiveness of expressing humility in political communication. Methodologically, we demonstrate that LLM-based methods perform worse than even simple bag-of-words models using our real text and outcomes from our experiment. Substantively, we isolate the causal effect of expressing humility on the perceived persuasiveness of political statements, offering new insights on communication effects for social media platforms, policy makers, and social scientists.
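The core contrast in the abstract — observational adjustment for a latent confounder versus a randomized experimental design — can be illustrated with a toy simulation. This is a minimal sketch under assumed data-generating values (a binary latent confounder, a fixed treatment effect of 0.5), not the paper's actual design or data: it only shows why a naive observational comparison is biased while randomized assignment recovers the effect by a simple difference in means.

```python
import random

random.seed(0)
TRUE_EFFECT = 0.5  # hypothetical causal effect of the text property

def mean(xs):
    return sum(xs) / len(xs)

def simulate(n, randomized):
    """Toy data: latent confounder u, binary treatment t, outcome y."""
    rows = []
    for _ in range(n):
        u = random.random() < 0.5
        if randomized:
            t = random.random() < 0.5          # design-based: t independent of u
        else:
            t = random.random() < (0.8 if u else 0.2)  # observational: t confounded by u
        y = TRUE_EFFECT * t + 1.0 * u + random.gauss(0, 0.1)
        rows.append((t, y))
    return rows

def diff_in_means(rows):
    return (mean([y for t, y in rows if t])
            - mean([y for t, y in rows if not t]))

naive = diff_in_means(simulate(10_000, randomized=False))   # biased upward by u
design = diff_in_means(simulate(10_000, randomized=True))   # close to TRUE_EFFECT

print(f"naive observational estimate: {naive:.2f}")
print(f"randomized design estimate:   {design:.2f}")
```

In the confounded arm the treated group over-represents units with high latent `u`, so the naive contrast absorbs the confounder's effect; randomizing assignment breaks that dependence, which is the intuition behind the design-based approach the abstract advocates.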


Page Count
36 pages

Category
Statistics: Methodology