Can LLMs Lie? Investigation beyond Hallucination
By: Haoran Huan, Mihir Prabhudesai, Mengning Wu, and more
Potential Business Impact:
Teaches AI to lie or to tell the truth.
Large language models (LLMs) have demonstrated impressive capabilities across a variety of tasks, but their increasing autonomy in real-world applications raises concerns about their trustworthiness. While hallucinations (unintentional falsehoods) have been widely studied, the phenomenon of lying, where an LLM knowingly generates falsehoods to achieve an ulterior objective, remains underexplored. In this work, we systematically investigate the lying behavior of LLMs, differentiating it from hallucinations and testing it in practical scenarios. Through mechanistic interpretability techniques, we uncover the neural mechanisms underlying deception, employing logit lens analysis, causal interventions, and contrastive activation steering to identify and control deceptive behavior. We study real-world lying scenarios and introduce behavioral steering vectors that enable fine-grained manipulation of lying tendencies. Further, we explore the trade-offs between lying and end-task performance, establishing a Pareto frontier where dishonesty can enhance goal optimization. Our findings contribute to the broader discourse on AI ethics, shedding light on the risks and potential safeguards for deploying LLMs in high-stakes environments. Code and more illustrations are available at https://llm-liar.github.io/
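To make the "contrastive activation steering" and "behavioral steering vector" ideas concrete, here is a minimal sketch of the general technique, not the authors' released code: collect hidden states for contrasting "honest" and "deceptive" prompts, take the mean difference as a steering vector, and add a scaled copy of it to one layer's output during generation. The model name ("gpt2"), layer index, prompt pairs, and steering scale are illustrative placeholders.

```python
# A minimal sketch of contrastive activation steering; not the authors' implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # stand-in model; the paper studies larger LLMs
LAYER = 6        # hypothetical layer whose output we steer

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def mean_activation(prompts, layer):
    """Mean hidden state at the output of block `layer`, averaged over tokens and prompts."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so block `layer` is index layer + 1
        acts.append(out.hidden_states[layer + 1].mean(dim=1))
    return torch.cat(acts).mean(dim=0)  # shape: (hidden_dim,)

honest = ["Answer truthfully: ...", "You must be honest: ..."]    # placeholder contrast pairs
deceptive = ["Answer deceptively: ...", "You must lie: ..."]

steering_vec = mean_activation(deceptive, LAYER) - mean_activation(honest, LAYER)

def steering_hook(vec, alpha):
    def hook(module, inputs, output):
        # GPT-2 blocks return a tuple; the hidden states are the first element.
        hidden = output[0] + alpha * vec.to(output[0].dtype)
        return (hidden,) + output[1:]
    return hook

# Positive alpha pushes generations toward the "deceptive" direction; negative suppresses it.
handle = model.transformer.h[LAYER].register_forward_hook(
    steering_hook(steering_vec, alpha=-4.0)
)
ids = tok("The capital of France is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20, do_sample=False)[0]))
handle.remove()
```

The sign and magnitude of alpha act as a dial on the behavior being steered, which is the kind of fine-grained control over lying tendencies the abstract describes.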
Similar Papers
Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts
Machine Learning (CS)
Finds when AI lies about hard problems.
When Thinking LLMs Lie: Unveiling the Strategic Deception in Representations of Reasoning Models
Artificial Intelligence
Teaches AI to tell the truth, not lie.
Thinking, Faithful and Stable: Mitigating Hallucinations in LLMs
Artificial Intelligence
Makes AI think more carefully and be more truthful.