Prospect Theory Fails for LLMs: Revealing Instability of Decision-Making under Epistemic Uncertainty
By: Rui Wang, Qihan Lin, Jiayu Liu, and others
Potential Business Impact:
Shows when AI decision-making becomes unstable under uncertainty.
Prospect Theory (PT) models human decision-making under uncertainty, while epistemic markers (e.g., "maybe") express uncertainty in language. However, it remains largely unexplored whether Prospect Theory applies to contemporary Large Language Models and whether epistemic markers affect their decision-making behaviour. To address these gaps, we design a three-stage experiment based on economic questionnaires. We propose a more general and precise evaluation framework for modelling LLMs' decision-making behaviour under PT, introducing uncertainty through the empirical probability values associated with commonly used epistemic markers in comparable contexts. We then incorporate epistemic markers into the evaluation framework according to their corresponding probability values to examine their influence on LLM decision-making behaviour. Our findings suggest that modelling LLMs' decision-making with PT is not consistently reliable, particularly when uncertainty is expressed in diverse linguistic forms. Our code is released at https://github.com/HKUST-KnowComp/MarPT.
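To make the setup concrete, here is a minimal sketch of the two ingredients the abstract combines: a PT valuation of a gamble, and a substitution of a numeric probability with the probability implied by an epistemic marker. It assumes the classic Tversky–Kahneman (1992) functional forms with their median parameter estimates; the marker-to-probability mapping is hypothetical and stands in for the empirical values the paper derives. The paper's actual framework and parameters may differ.

```python
import math

# Classic Tversky-Kahneman (1992) functional forms with their median
# parameter estimates; the paper's exact parameterisation may differ.
ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x: float) -> float:
    """PT value function: concave for gains, convex and steeper for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** BETA

def weight(p: float) -> float:
    """PT probability weighting: overweights small p, underweights large p."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

def pt_utility(prospect: list[tuple[float, float]]) -> float:
    """Subjective value of (outcome, probability) pairs. Uses the separable
    (non-cumulative) form, which is adequate for simple two-outcome gambles."""
    return sum(weight(p) * value(x) for x, p in prospect)

# Hypothetical marker-to-probability mapping (illustrative only; the paper
# uses empirical probability values elicited for each epistemic marker).
MARKER_PROB = {"almost certainly": 0.95, "probably": 0.75,
               "maybe": 0.50, "unlikely": 0.20}

# "Maybe win $100, otherwise nothing" versus a sure $40:
p = MARKER_PROB["maybe"]
risky = pt_utility([(100.0, p), (0.0, 1 - p)])
sure = pt_utility([(40.0, 1.0)])
print(f"risky: {risky:.2f}  sure: {sure:.2f}  -> prefers "
      + ("risky" if risky > sure else "sure"))
```

Under these illustrative parameters the sure $40 narrowly beats the gamble (about 25.7 vs. 24.2), reflecting risk aversion for gains; the paper's experiments test whether LLM choices shift consistently with such PT predictions when the probability is conveyed by a marker like "maybe" rather than a number.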
Similar Papers
An analysis of AI Decision under Risk: Prospect theory emerges in Large Language Models
Artificial Intelligence
AI makes risky choices like people do.
Plausibility as Failure: How LLMs and Humans Co-Construct Epistemic Error
Human-Computer Interaction
AI tricks people into trusting wrong answers.
Representations of Fact, Fiction and Forecast in Large Language Models: Epistemics and Attitudes
Computation and Language
Computers should show more clearly when they are unsure.