Knowing Your Uncertainty -- On the Application of LLMs in the Social Sciences
By: Bolun Zhang, Linzhuo Li, Yunqi Chen, and more
Large language models (LLMs) are rapidly being integrated into computational social science research, yet their black-box training and the stochastic elements designed into their inference pose unique challenges for scientific inquiry. This article argues that applying LLMs to social scientific tasks requires an explicit assessment of uncertainty, an expectation long established both in quantitative social science methodology and in machine learning. We introduce a unified framework for evaluating LLM uncertainty along two dimensions: the task type (T), which distinguishes between classification, short-form generation, and long-form generation, and the validation type (V), which captures the availability of reference data or evaluative criteria. Drawing on both the computer science and the social science literature, we map existing uncertainty quantification (UQ) methods onto this T-V typology and offer practical recommendations for researchers. The framework serves as both a methodological safeguard and a practical guide for integrating LLMs into rigorous social science research.
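To make the kind of uncertainty assessment described in the abstract concrete, the sketch below estimates predictive uncertainty for a classification task (one cell of the T-V typology: classification with reference labels potentially available) by sampling an LLM several times and computing the entropy of the resulting label distribution. This is a generic illustration of repeated-sampling UQ, not the authors' implementation; the `query_llm` function, the label set, and the sampling parameters are hypothetical placeholders.

```python
# Minimal sketch: entropy-based uncertainty for an LLM classification task.
# `query_llm(prompt, temperature)` is a hypothetical placeholder for an API
# call that returns one label string per invocation.
import math
from collections import Counter

LABELS = ["positive", "negative", "neutral"]  # hypothetical coding scheme


def query_llm(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder for a call to a language model provider."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")


def label_entropy(prompt: str, n_samples: int = 10) -> tuple[str, float]:
    """Sample the model repeatedly and return (majority label, normalized entropy).

    Normalized entropy is 0 when all samples agree and 1 when the sampled
    labels are spread uniformly across LABELS -- a simple, model-agnostic
    proxy for the per-item uncertainty a researcher could report.
    """
    counts = Counter(query_llm(prompt) for _ in range(n_samples))
    majority = counts.most_common(1)[0][0]
    probs = [c / n_samples for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return majority, entropy / math.log(len(LABELS))
```

In practice, a researcher could flag items whose normalized entropy exceeds some threshold for human review, or report the distribution of entropies alongside classification results as a transparency measure.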
Similar Papers
From Calibration to Collaboration: LLM Uncertainty Quantification Should Be More Human-Centered
Computation and Language
Helps people know when to trust AI answers.
Mapping Clinical Doubt: Locating Linguistic Uncertainty in LLMs
Computation and Language
Helps AI understand when doctors are unsure.
The Illusion of Certainty: Uncertainty quantification for LLMs fails under ambiguity
Machine Learning (CS)
Makes AI understand when it's unsure.