Score: 1

Predicting Biased Human Decision-Making with Large Language Models in Conversational Settings

Published: January 16, 2026 | arXiv ID: 2601.11049v1

By: Stephen Pilli, Vivek Nallur

Potential Business Impact:

Computers learn to predict how people make biased choices.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We examine whether large language models (LLMs) can predict biased decision-making in conversational settings, and whether their predictions capture not only human cognitive biases but also how those biases change under cognitive load. In a pre-registered study (N = 1,648), participants completed six classic decision-making tasks via a chatbot with dialogues of varying complexity. Participants exhibited two well-documented cognitive biases: the Framing Effect and the Status Quo Bias. Increased dialogue complexity led participants to report higher mental demand, and this increase in cognitive load selectively but significantly amplified the effect of the biases, demonstrating a load-bias interaction. We then evaluated whether LLMs (GPT-4, GPT-5, and open-source models) could predict individual decisions given demographic information and prior dialogue. While results were mixed across choice problems, LLM predictions that incorporated dialogue context were significantly more accurate in several key scenarios. Importantly, their predictions reproduced the same bias patterns and load-bias interactions observed in humans. Across all models tested, the GPT-4 family aligned most consistently with human behavior, outperforming GPT-5 and open-source models in both predictive accuracy and fidelity to human-like bias patterns. These findings advance our understanding of LLMs as tools for simulating human decision-making and inform the design of conversational agents that adapt to user biases.

Country of Origin
🇮🇪 Ireland

Repos / Data Links

Page Count
35 pages

Category
Computer Science:
Human-Computer Interaction