Challenging the Evaluator: LLM Sycophancy Under User Rebuttal
By: Sungwon Kim, Daniel Khashabi
Potential Business Impact:
Computers agree too much, making them bad judges.
Large Language Models (LLMs) often exhibit sycophancy, distorting responses to align with user beliefs, notably by readily agreeing with user counterarguments. Paradoxically, LLMs are increasingly adopted as successful evaluative agents for tasks such as grading and adjudicating claims. This research investigates that tension: why do LLMs show sycophancy when challenged in subsequent conversational turns, yet perform well when evaluating conflicting arguments presented simultaneously? We empirically tested these contrasting scenarios by varying key interaction patterns. We find that state-of-the-art models: (1) are more likely to endorse a user's counterargument when framed as a follow-up from a user, rather than when both responses are presented simultaneously for evaluation; (2) show increased susceptibility to persuasion when the user's rebuttal includes detailed reasoning, even when the conclusion of the reasoning is incorrect; and (3) are more readily swayed by casually phrased feedback than by formal critiques, even when the casual input lacks justification. Our results highlight the risk of relying on LLMs for judgment tasks without accounting for conversational framing.
Similar Papers
Invisible Saboteurs: Sycophantic LLMs Mislead Novices in Problem-Solving Tasks
Human-Computer Interaction
Makes AI less likely to agree with you wrongly.
When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models
Computation and Language
Makes AI agree with you, even if wrong.
When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models
Computation and Language
Fixes AI that agrees with you too much.