Analysing Differences in Persuasive Language in LLM-Generated Text: Uncovering Stereotypical Gender Patterns
By: Amalie Brogaard Pauli, Maria Barrett, Max Müller-Eberstein, and more
Potential Business Impact:
LLM-written messages can change how people think, and the persuasive style they use shifts depending on who the message is for.
Large language models (LLMs) are increasingly used for everyday communication tasks, including drafting interpersonal messages intended to influence and persuade. Prior work has shown that LLMs can successfully persuade humans and amplify persuasive language. It is therefore essential to understand how user instructions affect the generation of persuasive language, and whether the generated language differs, for example, when targeting different groups. In this work, we propose a framework for evaluating how persuasive language generation is affected by recipient gender, sender intent, or output language. We evaluate 13 LLMs across 16 languages using pairwise prompt instructions, and assess model responses on 19 categories of persuasive language using an LLM-as-judge setup grounded in social psychology and communication science. Our results reveal significant gender differences in the persuasive language generated across all models. These patterns reflect biases consistent with gender-stereotypical linguistic tendencies documented in social psychology and sociolinguistics.
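To make the setup concrete, the sketch below illustrates one way such a pairwise evaluation could be wired up in Python: the same drafting task is issued twice with only the recipient's gender changed, and a judge model is asked whether each response uses a given persuasion category. This is a minimal sketch of ours, not the paper's implementation; the `call_llm` wrapper, prompt templates, and category names are placeholders and do not reflect the authors' full 19-category taxonomy.

```python
# Minimal sketch (assumptions, not the authors' code) of pairwise persuasive-language
# evaluation: generate two responses that differ only in recipient gender, then
# score each with an LLM-as-judge prompt per persuasion category.

from typing import Dict, List


def call_llm(prompt: str, model: str) -> str:
    """Hypothetical stand-in for whatever LLM client/API is actually used."""
    raise NotImplementedError("Plug in your preferred LLM client here.")


# Illustrative writing task; only the recipient changes between the paired prompts.
TASK_TEMPLATE = "Write a short message to persuade my {recipient} to join a gym with me."

# Illustrative subset of persuasive-language categories (placeholder names).
CATEGORIES: List[str] = ["flattery", "emotional appeal", "appeal to authority", "scarcity"]

JUDGE_TEMPLATE = (
    "Does the following message use {category}? Answer 'yes' or 'no'.\n\n"
    "Message:\n{message}"
)


def pairwise_generation(model: str) -> Dict[str, str]:
    """Generate responses for two prompts that differ only in recipient gender."""
    return {
        recipient: call_llm(TASK_TEMPLATE.format(recipient=recipient), model=model)
        for recipient in ("wife", "husband")
    }


def judge(message: str, judge_model: str) -> Dict[str, bool]:
    """Score one message on each persuasion category with an LLM-as-judge prompt."""
    return {
        category: call_llm(
            JUDGE_TEMPLATE.format(category=category, message=message),
            model=judge_model,
        ).strip().lower().startswith("yes")
        for category in CATEGORIES
    }
```

Keeping the paired prompts identical except for the recipient isolates gender as the only varied factor, so any systematic difference in the judged categories can be attributed to that variable rather than to the task wording.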
Similar Papers
Can AI-Generated Persuasion Be Detected? Persuaficial Benchmark and AI vs. Human Linguistic Differences
Computation and Language
Asks whether AI-generated persuasive writing can be detected and how it differs linguistically from human writing.
A Meta-Analysis of the Persuasive Power of Large Language Models
Human-Computer Interaction
Finds that LLMs can persuade people about as effectively as humans.
Investigating Gender Bias in LLM-Generated Stories via Psychological Stereotypes
Computation and Language
Examines how LLM-generated stories reflect gender stereotypes.