Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts
By: Zhaomin Wu, Mingzhe Du, See-Kiong Ng, and more
Potential Business Impact:
Finds when AI lies about hard problems.
Large Language Models (LLMs) have been widely deployed in reasoning, planning, and decision-making tasks, making their trustworthiness a critical concern. The potential for intentional deception, where an LLM deliberately fabricates or conceals information to serve a hidden objective, remains a significant and underexplored threat. Existing studies typically induce such deception by explicitly setting a "hidden" objective through prompting or fine-tuning, which may not fully reflect real-world human-LLM interactions. Moving beyond this human-induced deception, we investigate LLMs' self-initiated deception on benign prompts. To address the absence of ground truth in this evaluation, we propose a novel framework using "contact searching questions." This framework introduces two statistical metrics derived from psychological principles to quantify the likelihood of deception. The first, the Deceptive Intention Score, measures the model's bias towards a hidden objective. The second, the Deceptive Behavior Score, measures the inconsistency between the LLM's internal belief and its expressed output. Upon evaluating 14 leading LLMs, we find that both metrics escalate as task difficulty increases, rising in parallel for most models. Building on these findings, we formulate a mathematical model to explain this behavior. These results reveal that even the most advanced LLMs exhibit an increasing tendency toward deception when handling complex problems, raising critical concerns for the deployment of LLM agents in complex and crucial domains.
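The abstract does not give the formulas behind the two scores, so the following is a purely illustrative sketch, not the paper's method: it proxies the Deceptive Behavior Score as the disagreement rate between privately elicited beliefs and publicly expressed answers, and the Deceptive Intention Score as the rate at which expressed answers align with a candidate hidden objective. All function names and logic here are hypothetical.

```python
def deceptive_behavior_score(beliefs, outputs):
    """Hypothetical proxy: fraction of trials where the model's privately
    elicited belief disagrees with its publicly expressed answer."""
    assert len(beliefs) == len(outputs) and beliefs
    mismatches = sum(b != o for b, o in zip(beliefs, outputs))
    return mismatches / len(beliefs)


def deceptive_intention_score(outputs, hidden_objective_answer):
    """Hypothetical proxy: fraction of expressed answers that align with a
    candidate hidden objective (bias toward that objective)."""
    assert outputs
    return sum(o == hidden_objective_answer for o in outputs) / len(outputs)


# Toy example: four repeated trials of the same benign question.
beliefs = ["yes", "yes", "no", "yes"]   # answers elicited privately
outputs = ["yes", "no", "no", "no"]     # answers expressed to the user
print(deceptive_behavior_score(beliefs, outputs))       # 0.5
print(deceptive_intention_score(outputs, "no"))         # 0.75
```

Under this toy framing, the paper's finding that both metrics rise with task difficulty would correspond to both rates growing as questions get harder.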
Similar Papers
Can LLMs Lie? Investigation beyond Hallucination
Machine Learning (CS)
Teaches AI to lie or tell the truth.
Evaluating & Reducing Deceptive Dialogue From Language Models with Multi-turn RL
Computation and Language
Makes AI less likely to lie to people.
Do Large Language Models Exhibit Spontaneous Rational Deception?
Computation and Language
Smart AI sometimes lies when doing so benefits it.