LLM-Supported Content Analysis of Motivated Reasoning on Climate Change
By: Yuheun Kim, Qiaoyi Liu, Jeff Hemsley
Potential Business Impact:
Helps explain why people argue about climate change online.
Public discourse around climate change remains polarized despite scientific consensus on anthropogenic climate change (ACC). This study examines how "believers" and "skeptics" of ACC differ in their YouTube comment discourse. We analyzed 44,989 comments from 30 videos using a large language model (LLM) as a qualitative annotator, identifying ten distinct topics. These annotations were combined with social network analysis to examine engagement patterns. A linear mixed-effects model showed that comments about government policy and natural cycles generated significantly less interaction than comments about misinformation, suggesting these topics are ideologically settled points within communities. These patterns reflect motivated reasoning, where users selectively engage with content that aligns with their identity and beliefs. Our findings demonstrate the utility of LLMs for large-scale qualitative analysis and show how climate discourse is shaped not only by content, but by underlying cognitive and ideological motivations.
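The abstract does not include the authors' analysis code. As a minimal sketch of the modeling step it describes, the following fits a linear mixed-effects model with `statsmodels`, treating comment topic as a fixed effect and video as a random intercept (comments nested within videos). All data here are synthetic stand-ins; the topic names, effect sizes, and column names are assumptions for illustration, not the paper's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in data: each comment has an LLM-assigned topic label and
# belongs to one of 30 videos (mirroring the paper's nesting structure).
topics = ["misinformation", "government_policy", "natural_cycles"]
n = 600
df = pd.DataFrame({
    "topic": rng.choice(topics, size=n),
    "video_id": rng.integers(0, 30, size=n),
})

# Simulated interaction outcome: a topic effect plus a per-video random
# intercept plus noise. Values are illustrative only.
video_effect = rng.normal(0.0, 0.5, size=30)
topic_effect = {"misinformation": 1.0, "government_policy": 0.2, "natural_cycles": 0.1}
df["interaction"] = (
    df["topic"].map(topic_effect)
    + video_effect[df["video_id"]]
    + rng.normal(0.0, 0.3, size=n)
)

# Linear mixed-effects model: topic as fixed effect (misinformation as the
# reference level, as in the reported comparison), video as random intercept.
model = smf.mixedlm(
    "interaction ~ C(topic, Treatment('misinformation'))",
    data=df,
    groups=df["video_id"],
)
result = model.fit()
print(result.summary())
```

With this setup, the fitted coefficients for `government_policy` and `natural_cycles` are negative relative to the `misinformation` baseline, matching the direction of the effect the abstract reports (lower interaction for those topics).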
Similar Papers
Artificial Intelligence and Civil Discourse: How LLMs Moderate Climate Change Conversations
Computers and Society
AI calms down online arguments about climate change.
Assessing LLM Reasoning Through Implicit Causal Chain Discovery in Climate Discourse
Artificial Intelligence
Computers learn to explain how things happen.
ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to Answer Climate Change Queries
Computation and Language
Creates better AI for climate change questions.