DialDefer: A Framework for Detecting and Mitigating LLM Dialogic Deference

Published: January 15, 2026 | arXiv ID: 2601.10896v1

By: Parisa Rabbani, Priyam Sahoo, Ruben Mathew, and more

Potential Business Impact:

AI judges change opinions based on who speaks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

LLMs are increasingly used as third-party judges, yet their reliability when evaluating speakers in dialogue remains poorly understood. We show that LLMs judge identical claims differently depending on framing: the same content elicits different verdicts when presented as a statement to verify ("Is this statement correct?") versus attributed to a speaker ("Is this speaker correct?"). We call this dialogic deference and introduce DialDefer, a framework for detecting and mitigating these framing-induced judgment shifts. Our Dialogic Deference Score (DDS) captures directional shifts that aggregate accuracy obscures. Across nine domains, 3k+ instances, and four models, conversational framing induces large shifts (|DDS| up to 87pp, p < .0001) while accuracy remains stable (<2pp), with effects amplifying 2-4x on naturalistic Reddit conversations. Models can shift toward agreement (deference) or disagreement (skepticism) depending on domain -- the same model ranges from DDS = -53 on graduate-level science to +58 on social judgment. Ablations reveal that human-vs-LLM attribution drives the largest shifts (17.7pp swing), suggesting models treat disagreement with humans as more costly than disagreement with AI. Mitigation attempts reduce deference but can over-correct into skepticism; we therefore frame dialogic deference as a calibration problem beyond accuracy optimization.
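The abstract does not give the exact DDS formula, but it describes a signed, directional shift in verdicts between the two framings that aggregate accuracy can hide. A minimal sketch, assuming DDS is the percentage-point change in the model's agreement rate when a claim is speaker-attributed versus statement-framed (all function names hypothetical):

```python
# Illustrative sketch only: the paper's exact DDS definition is not stated in
# the abstract. Here we assume DDS = signed shift (in percentage points) of the
# agreement rate under speaker framing relative to statement framing.

def agreement_rate(verdicts: list[bool]) -> float:
    """Fraction of instances where the judge model agreed with the claim."""
    return sum(verdicts) / len(verdicts)

def dialogic_deference_score(statement_verdicts: list[bool],
                             speaker_verdicts: list[bool]) -> float:
    """Hypothetical DDS: positive = shift toward agreement (deference),
    negative = shift toward disagreement (skepticism)."""
    return 100.0 * (agreement_rate(speaker_verdicts)
                    - agreement_rate(statement_verdicts))

# Why accuracy can stay flat while DDS swings: if the model flips to "agree"
# on some false claims and to "disagree" on an equal number of true ones,
# aggregate accuracy is unchanged but the directional shift is large.
statement = [True, True, False, False, True, False]  # "Is this statement correct?"
speaker   = [True, True, True, True, True, False]    # "Is this speaker correct?"
print(dialogic_deference_score(statement, speaker))  # ~ +33.3 pp toward agreement
```

Under this reading, a per-instance paired comparison of the two framings recovers the directional effect that averaging verdicts across conditions would wash out.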

Country of Origin
🇺🇸 United States

Page Count
35 pages

Category
Computer Science:
Computation and Language