Artificial Intelligence and Civil Discourse: How LLMs Moderate Climate Change Conversations
By: Wenlu Fan, Wentao Xu
Potential Business Impact:
AI calms down online arguments about climate change.
As large language models (LLMs) become increasingly integrated into online platforms and digital communication spaces, their potential to influence public discourse, particularly in contentious areas such as climate change, requires systematic investigation. This study examines how LLMs naturally moderate climate change conversations through their distinct communicative behaviors. We conduct a comparative analysis of conversations between LLMs and human users on social media platforms, using five advanced models: three open-source LLMs (Gemma, Llama 3, and Llama 3.3) and two commercial systems (GPT-4o by OpenAI and Claude 3.5 by Anthropic). Through sentiment analysis, we assess the emotional characteristics of responses from both LLMs and humans. The results reveal two key mechanisms through which LLMs moderate discourse: first, LLMs consistently display emotional neutrality, showing far less polarized sentiment than human users; second, LLMs maintain lower emotional intensity across contexts, creating a stabilizing effect in conversations. These findings suggest that LLMs possess inherent moderating capacities that could improve the quality of public discourse on controversial topics. This research enhances our understanding of how AI might support more civil and constructive climate change discussions and informs the design of AI-assisted communication tools.
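To make the two measurements concrete, the sketch below shows one way to compare sentiment polarization and emotional intensity between LLM and human replies. It is a minimal illustration, not the authors' pipeline: the paper does not specify its sentiment tool, so the choice of NLTK's VADER analyzer, the metric definitions (mean absolute compound score for intensity, standard deviation of compound scores for polarization), and the example replies are all assumptions made here for clarity.

```python
# Illustrative sketch (not the paper's actual method): compare sentiment
# polarization and intensity of LLM vs. human replies using NLTK's VADER.
import statistics

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()


def sentiment_profile(replies):
    """Return (intensity, polarization) for a list of reply texts.

    intensity    = mean absolute compound score (how emotional the replies are)
    polarization = std. dev. of compound scores (how spread the sentiment is
                   between strongly negative and strongly positive)
    """
    compounds = [sia.polarity_scores(text)["compound"] for text in replies]
    intensity = statistics.mean(abs(c) for c in compounds)
    polarization = statistics.pstdev(compounds)
    return intensity, polarization


# Hypothetical placeholder replies to a climate change post.
human_replies = [
    "This is absolute nonsense, wake up!",
    "Finally someone says it, the deniers are ruining everything!",
]
llm_replies = [
    "That's a fair concern; the scientific consensus points to human-driven warming.",
    "There are several factors to weigh, and both mitigation and adaptation play a role.",
]

for label, replies in [("human", human_replies), ("LLM", llm_replies)]:
    intensity, polarization = sentiment_profile(replies)
    print(f"{label}: intensity={intensity:.2f}, polarization={polarization:.2f}")
```

Under these assumed definitions, the paper's two findings would appear as lower intensity and lower polarization scores for the LLM replies than for the human ones.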
Similar Papers
From Moderation to Mediation: Can LLMs Serve as Mediators in Online Flame Wars?
Artificial Intelligence
Helps online arguments become calmer and kinder.
ClimateChat: Designing Data and Methods for Instruction Tuning LLMs to Answer Climate Change Queries
Computation and Language
Creates better AI for climate change questions.
Passing the Turing Test in Political Discourse: Fine-Tuning LLMs to Mimic Polarized Social Media Comments
Computation and Language
AI can create fake, biased posts to trick people.