Investigating the Effects of Cognitive Biases in Prompts on Large Language Model Outputs

Published: June 14, 2025 | arXiv ID: 2506.12338v1

By: Yan Sun, Stanley Kok

Potential Business Impact:

Helps AI systems resist biased or misleading framing in user prompts.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper investigates the influence of cognitive biases on Large Language Model (LLM) outputs. Cognitive biases, such as confirmation and availability biases, can distort user inputs through prompts, potentially leading to unfaithful and misleading outputs from LLMs. Using a systematic framework, our study introduces various cognitive biases into prompts and assesses their impact on LLM accuracy across multiple benchmark datasets, including general and financial Q&A scenarios. The results demonstrate that even subtle biases can significantly alter LLM answer choices, highlighting a critical need for bias-aware prompt design and mitigation strategies. Additionally, our attention weight analysis highlights how these biases can alter the internal decision-making processes of LLMs, affecting the attention distribution in ways that are associated with output inaccuracies. This research has implications for AI developers and users in enhancing the robustness and reliability of AI applications in diverse domains.
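The evaluation setup described above — injecting bias-laden phrasing into prompts and comparing answer accuracy against an unbiased baseline — can be sketched as follows. This is a minimal illustration, not the authors' code: the bias templates, the `hint` parameter, and the helper names are all hypothetical stand-ins for whatever prompt variants and scoring the paper actually uses.

```python
# Hypothetical sketch of a bias-injection evaluation loop.
# Templates and function names are illustrative, not from the paper.

BIAS_TEMPLATES = {
    "none": "{question}",
    "confirmation": "Most experts already agree the answer is {hint}. {question}",
    "availability": "A widely shared recent story claimed {hint}. {question}",
}

def build_prompts(question: str, hint: str) -> dict:
    """Return one prompt per bias condition for the same underlying question."""
    return {
        bias: template.format(question=question, hint=hint)
        for bias, template in BIAS_TEMPLATES.items()
    }

def accuracy_by_condition(answers: dict, gold: list) -> dict:
    """Fraction of model answers matching the gold labels, per bias condition."""
    return {
        bias: sum(a == g for a, g in zip(preds, gold)) / len(gold)
        for bias, preds in answers.items()
    }
```

Comparing `accuracy_by_condition` across the "none" baseline and each biased condition gives the per-bias accuracy shift the study measures; a drop under a biased template indicates the model's answer choices were swayed by the injected framing.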

Country of Origin
πŸ‡ΈπŸ‡¬ Singapore

Page Count
18 pages

Category
Computer Science:
Computation and Language