Building Resilient Information Ecosystems: Large LLM-Generated Dataset of Persuasion Attacks
By: Hsien-Te Kao, Aleksey Panasyuk, Peter Bautista, and more
Potential Business Impact:
Helps organizations fight fake news faster.
Organizational communication is essential for public trust, but generative AI models now pose a significant challenge: they can produce persuasive content that forms competing narratives against official messages from government and commercial organizations at speed and scale. This leaves agencies in a reactive position, often unaware of how these models construct their persuasive strategies, and makes it harder to sustain communication effectiveness. In this paper, we introduce a large LLM-generated persuasion attack dataset of 134,136 attacks generated by GPT-4, Gemma 2, and Llama 3.1 against agency news. The attacks span the 23 persuasive techniques from SemEval 2023 Task 3 and target 972 press releases from ten agencies. They come in two mediums, press release statements and social media posts, covering both long-form and short-form communication strategies. We analyzed the moral resonance of these persuasion attacks to understand their attack vectors: GPT-4's attacks focus mainly on Care, with Authority and Loyalty also playing a role; Gemma 2 emphasizes Care and Authority; and Llama 3.1 centers on Loyalty and Care. Analyzing LLM-generated persuasion attacks across models enables proactive defense, helps organizations build reputational armor, and propels the development of effective and resilient communication in the information ecosystem.
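To make the dataset's structure concrete, the following is a minimal sketch of the kind of per-model moral-foundation tally the abstract describes. The field names (`model`, `medium`, `moral_foundation`) and the sample records are assumptions for illustration only; the paper's actual dataset schema is not specified in this summary.

```python
from collections import Counter

# Hypothetical record layout for individual persuasion attacks.
# Field names and values below are assumed, not the dataset's real schema.
attacks = [
    {"model": "GPT-4",     "medium": "press_release", "moral_foundation": "Care"},
    {"model": "GPT-4",     "medium": "social_media",  "moral_foundation": "Authority"},
    {"model": "Gemma 2",   "medium": "press_release", "moral_foundation": "Care"},
    {"model": "Llama 3.1", "medium": "social_media",  "moral_foundation": "Loyalty"},
]

def foundation_profile(records, model):
    """Count moral-foundation labels among attacks generated by one model."""
    return Counter(r["moral_foundation"] for r in records if r["model"] == model)

print(foundation_profile(attacks, "GPT-4"))
```

A profile like this, computed over the full 134,136-attack dataset, is what would surface the reported patterns (e.g., GPT-4 skewing toward Care, Llama 3.1 toward Loyalty).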
Similar Papers
A Hybrid Theory and Data-driven Approach to Persuasion Detection with Large Language Models
Computation and Language
Helps computers tell if online messages change minds.
A Framework to Assess the Persuasion Risks Large Language Model Chatbots Pose to Democratic Societies
Computation and Language
Computers can now convince voters cheaper than ads.
Persuasiveness and Bias in LLM: Investigating the Impact of Persuasiveness and Reinforcement of Bias in Language Models
Computation and Language
AI learns to trick people, spreading lies.