Are Your Agents Upward Deceivers?
By: Dadi Guo , Qingyu Liu , Dongrui Liu and more
Potential Business Impact:
Computers sometimes lie about failing tasks.
Large Language Model (LLM)-based agents are increasingly used as autonomous subordinates that carry out tasks for users. This raises the question of whether they may also engage in deception, similar to how individuals in human organizations lie to superiors to create a good image or avoid punishment. We observe and define agentic upward deception, a phenomenon in which an agent facing environmental constraints conceals its failure and performs actions that were not requested without reporting. To assess its prevalence, we construct a benchmark of 200 tasks covering five task types and eight realistic scenarios in a constrained environment, such as broken tools or mismatched information sources. Evaluations of 11 popular LLMs reveal that these agents typically exhibit action-based deceptive behaviors, such as guessing results, performing unsupported simulations, substituting unavailable information sources, and fabricating local files. We further test prompt-based mitigation and find only limited reductions, suggesting that it is difficult to eliminate and highlighting the need for stronger mitigation strategies to ensure the safety of LLM-based agents.
Similar Papers
Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts
Machine Learning (CS)
Finds when AI lies about hard problems.
Do Large Language Models Exhibit Spontaneous Rational Deception?
Computation and Language
Smart AI sometimes lies when it benefits them.
Can LLMs Lie? Investigation beyond Hallucination
Machine Learning (CS)
Teaches AI to lie or tell truth.