Mind the Gap: Linguistic Divergence and Adaptation Strategies in Human-LLM Assistant vs. Human-Human Interactions
By: Fulei Zhang, Zhou Yu
Potential Business Impact:
Teaches chatbots to talk like people do.
As Large Language Models (LLMs) are increasingly deployed in customer-facing applications, a critical yet underexplored question is how users communicate differently with LLM chatbots compared to human agents. In this study, we present empirical evidence that users adopt distinct communication styles when interacting with chatbots versus human agents. Our analysis reveals significant differences in grammatical fluency, politeness, and lexical diversity in user language between the two settings. These findings suggest that models trained exclusively on human-human interaction data may not adequately accommodate the communication style shift that occurs once an LLM chatbot is deployed. To enhance LLM robustness to post-launch communication style changes, we experimented with two strategies: (1) data augmentation during the post-training phase and (2) inference-time user message reformulation. Our results indicate that models trained on stylistically diverse datasets significantly outperform those trained exclusively on original or stylistically uniform datasets, while inference-time reformulation proved less effective. These insights help us better adapt our models for improved LLM-user interaction experiences.
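To make the first strategy concrete, here is a minimal Python sketch of what training-time data augmentation along these stylistic axes could look like: perturbing the user turns of human-human dialogues toward the terser, less polite style the paper observes in chatbot interactions. The specific heuristics below (stripping politeness markers, truncating clauses, lowercasing) and all function names are illustrative assumptions, not the paper's actual augmentation pipeline.

```python
import random

# Illustrative politeness markers to strip; the paper's actual
# augmentation method is not specified in the abstract.
POLITENESS_MARKERS = ("please", "thank you", "thanks", "kindly", "would you mind")

def chatbot_style_variant(user_turn: str, rng: random.Random) -> str:
    """Perturb a human-human user turn toward the terser, less polite
    style users tend to adopt with chatbots (hypothetical heuristics)."""
    text = user_turn
    # 1. Drop politeness markers to lower the politeness level.
    for marker in POLITENESS_MARKERS:
        text = text.replace(marker, "").replace(marker.capitalize(), "")
    # 2. Sometimes truncate to the first clause, reducing grammatical
    #    fluency and lexical diversity.
    if rng.random() < 0.5:
        text = text.split(",")[0].split(" and ")[0]
    # 3. Sometimes lowercase everything, mimicking casual typing.
    if rng.random() < 0.5:
        text = text.lower()
    return " ".join(text.split())  # normalize leftover whitespace

def augment_dialogue(dialogue: list[dict], rng: random.Random) -> list[dict]:
    """Return a stylistic variant of a dialogue, perturbing only user turns."""
    return [
        {**turn, "text": chatbot_style_variant(turn["text"], rng)}
        if turn["role"] == "user" else turn
        for turn in dialogue
    ]

if __name__ == "__main__":
    rng = random.Random(0)
    dialogue = [
        {"role": "user",
         "text": "Hi, could you please check my order status, and thanks in advance!"},
        {"role": "assistant",
         "text": "Sure, let me look that up for you."},
    ]
    # Mix originals with augmented copies to build the stylistically
    # diverse training set the abstract credits with better robustness.
    training_set = [dialogue, augment_dialogue(dialogue, rng)]
    for d in training_set:
        print(d[0]["text"])
```

The design point matching the abstract's finding is that the augmented copies are added alongside, not in place of, the original human-human data, so the model sees both registers during post-training rather than a single uniform style.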
Similar Papers
LLMs syntactically adapt their language use to their conversational partner
Computation and Language
Computers copy how people talk to each other.
Flipping the Dialogue: Training and Evaluating User Language Models
Computation and Language
Makes AI better at talking like real people.
How human is the machine? Evidence from 66,000 Conversations with Large Language Models
Human-Computer Interaction
AI sometimes thinks differently than people.