A Concise Agent is Less Expert: Revealing Side Effects of Using Style Features on Conversational Agents
By: Young-Min Cho, Yuan Yuan, Sharath Chandra Guntuku, and more
Potential Business Impact:
Makes AI talk nicely without losing important facts.
Style features such as friendly, helpful, or concise are widely used in prompts to steer the behavior of Large Language Model (LLM) conversational agents, yet their unintended side effects remain poorly understood. In this work, we present the first systematic study of cross-feature stylistic side effects. We conduct a comprehensive survey of 127 conversational agent papers from the ACL Anthology and identify 12 frequently used style features. Using controlled, synthetic dialogues across task-oriented and open-domain settings, we quantify how prompting for one style feature causally affects others via a pairwise LLM-as-a-Judge evaluation framework. Our results reveal consistent and structured side effects (for example, prompting for conciseness significantly reduces perceived expertise), demonstrating that style features are deeply entangled rather than orthogonal. To support future research, we introduce CASSE (Conversational Agent Stylistic Side Effects), a dataset capturing these complex interactions. We further evaluate prompt-based and activation-steering-based mitigation strategies and find that while they can partially restore suppressed traits, they often degrade the primary intended style. These findings challenge the assumption of faithful style control in LLMs and highlight the need for multi-objective and more principled approaches to safe, targeted stylistic steering in conversational agents.
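To make the pairwise LLM-as-a-Judge setup concrete, here is a minimal sketch of how such a comparison might be wired up. This is not the authors' code: the judge prompt, the feature subset, and the `call_llm` hook are illustrative assumptions, and the paper's actual framework covers all 12 surveyed features and its own judging protocol.

```python
# Illustrative sketch (not the paper's implementation) of a pairwise
# LLM-as-a-Judge comparison for measuring cross-feature side effects.
# `call_llm` is a hypothetical hook for whatever chat model serves as judge.
from typing import Callable

STYLE_FEATURES = ["friendly", "helpful", "concise", "expert"]  # subset, for illustration

JUDGE_TEMPLATE = """You are comparing two assistant responses to the same user turn.
Which response sounds more {feature}?
Response A: {a}
Response B: {b}
Answer with exactly one letter: A or B."""

def pairwise_judgment(call_llm: Callable[[str], str],
                      response_a: str, response_b: str, feature: str) -> str:
    """Ask the judge which response exhibits `feature` more strongly."""
    prompt = JUDGE_TEMPLATE.format(feature=feature, a=response_a, b=response_b)
    verdict = call_llm(prompt).strip().upper()
    return verdict if verdict in {"A", "B"} else "TIE"

def side_effect_rate(call_llm: Callable[[str], str],
                     baseline_responses: list[str],
                     styled_responses: list[str],
                     measured_feature: str) -> float:
    """Fraction of pairs where the style-prompted agent (B) wins on `measured_feature`.
    Rates well below 0.5 suggest the prompted style suppresses the measured trait,
    e.g. a concise-prompted agent losing on perceived expertise."""
    wins = sum(
        1 for base, styled in zip(baseline_responses, styled_responses)
        if pairwise_judgment(call_llm, base, styled, measured_feature) == "B"
    )
    return wins / max(len(baseline_responses), 1)
```

In practice a study like this would also randomize the A/B ordering to control for position bias and aggregate judgments over many synthetic dialogues per feature pair; those details are omitted here for brevity.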
Similar Papers
Substance over Style: Evaluating Proactive Conversational Coaching Agents
Computation and Language
Helps AI coaches talk better with people.
Vibe Check: Understanding the Effects of LLM-Based Conversational Agents' Personality and Alignment on User Perceptions in Goal-Oriented Tasks
Human-Computer Interaction
Makes chatbots more likable with just enough personality.
Evaluating the Effectiveness of Large Language Models in Solving Simple Programming Tasks: A User-Centered Study
Human-Computer Interaction
AI helps students code faster by talking with them.