Detecting Winning Arguments with Large Language Models and Persuasion Strategies
By: Tiziano Labruna, Arkadiusz Modzelewski, Giorgio Satta, and more
Detecting persuasion in argumentative text is a challenging task with important implications for understanding human communication. This work investigates how persuasion strategies such as Attack on reputation, Distraction, and Manipulative wording determine the persuasiveness of a text. We conduct experiments on three annotated argument datasets: Winning Arguments (built from the Change My View subreddit), Anthropic/Persuasion, and Persuasion for Good. Our approach leverages large language models (LLMs) with Multi-Strategy Persuasion Scoring, which guides reasoning over six persuasion strategies. Results show that strategy-guided reasoning improves persuasiveness prediction. To better understand the influence of content, we organize the Winning Arguments dataset into broad discussion topics and analyze performance across them. We publicly release this topic-annotated version of the dataset to facilitate future research. Overall, our methodology demonstrates the value of structured, strategy-aware prompting for enhancing interpretability and robustness in argument quality assessment.
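The abstract does not spell out the prompt format behind Multi-Strategy Persuasion Scoring, so the following is only a minimal sketch of the general idea: ask the model to rate an argument on each persuasion strategy before committing to an overall persuasiveness score. Only three of the six strategies are named in the abstract; the other three listed below, the `call_llm` callable, and the 0-5 scoring scale are placeholders, not the authors' actual setup.

```python
from typing import Callable, Dict

# Three strategies are named in the abstract; the remaining three are
# hypothetical placeholders standing in for the paper's full set of six.
STRATEGIES = [
    "Attack on reputation",
    "Distraction",
    "Manipulative wording",
    "Justification",       # placeholder
    "Simplification",      # placeholder
    "Call to action",      # placeholder
]

PROMPT_TEMPLATE = """You are assessing how persuasive the following argument is.
For each persuasion strategy listed, give a score from 0 (absent) to 5 (strongly present),
one per line in the form "<strategy>: <score>". Then give an overall persuasiveness
score from 0 to 5 on a final line in the form "Overall: <score>".

Strategies: {strategies}

Argument:
{argument}
"""


def score_persuasiveness(argument: str, call_llm: Callable[[str], str]) -> Dict[str, float]:
    """Prompt an LLM to reason over each strategy, then parse the per-strategy
    and overall scores from its reply."""
    prompt = PROMPT_TEMPLATE.format(strategies=", ".join(STRATEGIES), argument=argument)
    reply = call_llm(prompt)

    scores: Dict[str, float] = {}
    for line in reply.splitlines():
        name, sep, value = line.partition(":")
        if not sep:
            continue
        try:
            scores[name.strip()] = float(value.strip())
        except ValueError:
            continue  # skip lines that are not "<name>: <number>"
    return scores


if __name__ == "__main__":
    # Stubbed model so the sketch runs without any API access.
    def fake_llm(prompt: str) -> str:
        return (
            "Attack on reputation: 1\n"
            "Distraction: 0\n"
            "Manipulative wording: 2\n"
            "Overall: 3"
        )

    print(score_persuasiveness("Large clinical trials show the vaccine is safe.", fake_llm))
```

In this framing, the per-strategy scores act as intermediate reasoning steps that make the final persuasiveness judgment more interpretable, which is the property the abstract highlights.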
Similar Papers
A Generalizable Rhetorical Strategy Annotation Model Using LLM-based Debate Simulation and Labelling
Computation and Language
Finds how speakers try to convince you.
Can AI-Generated Persuasion Be Detected? Persuaficial Benchmark and AI vs. Human Linguistic Differences
Computation and Language
Makes fake persuasive writing harder to spot.
How Persuasive Could LLMs Be? A First Study Combining Linguistic-Rhetorical Analysis and User Experiments
Human-Computer Interaction
AI arguments don't change minds on tough topics.