Comparing human and LLM politeness strategies in free production
By: Haoran Zhao, Robert D. Hawkins
Potential Business Impact:
Computers learn to talk nicely, but sometimes they overdo it.
Polite speech poses a fundamental alignment challenge for large language models (LLMs). Humans deploy a rich repertoire of linguistic strategies to balance informational and social goals, from positive strategies that build rapport (compliments, expressions of interest) to negative strategies that minimize imposition (hedging, indirectness). We investigate whether LLMs employ a similarly context-sensitive repertoire by comparing human and LLM responses in both constrained and open-ended production tasks. We find that larger models (≥70B parameters) successfully replicate key preferences from the computational pragmatics literature, and that human evaluators surprisingly prefer LLM-generated responses in open-ended contexts. However, further linguistic analyses reveal that models disproportionately rely on negative politeness strategies even in positive contexts, potentially leading to misinterpretations. While modern LLMs demonstrate an impressive command of politeness strategies, these subtle differences raise important questions about pragmatic alignment in AI systems.
Similar Papers
Evaluating Behavioral Alignment in Conflict Dialogue: A Multi-Dimensional Comparison of LLM Agents and Humans
Computation and Language
AI learns to argue and negotiate like people.
Evaluating LLM-Generated Versus Human-Authored Responses in Role-Play Dialogues
Computation and Language
Computers get worse at talking over time.
Human Preferences for Constructive Interactions in Language Model Alignment
Human-Computer Interaction
Teaches AI to talk nicely to everyone.