Large Language Models are overconfident and amplify human bias
By: Fengfei Sun, Ningke Li, Kailong Wang, and more
Potential Business Impact:
Computers think they know more than they do.
Large language models (LLMs) are revolutionizing every aspect of society and are increasingly used in problem-solving tasks as substitutes for human assessment and reasoning. Because LLMs are trained on what humans write, they are prone to learning human biases, and one of the most widespread of these is overconfidence. We examine whether LLMs inherit this bias. We automatically construct reasoning problems with known ground truths and prompt LLMs to assess the confidence in their answers, closely following protocols used in human experiments. We find that all five LLMs we study are overconfident: they overestimate the probability that their answers are correct by between 20% and 60%. Humans achieve accuracy similar to the more advanced LLMs but show far lower overconfidence. Although humans and LLMs are similarly biased on questions they are certain they answered correctly, a key difference emerges: LLM bias increases sharply relative to humans as they become less sure that their answers are correct. We also show that LLM input has ambiguous effects on human decision making: it increases accuracy, but it more than doubles the extent of overconfidence in the answers.
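The overconfidence measure described in the abstract can be sketched as stated confidence minus realized accuracy. The snippet below is a minimal illustration under that assumption; the function name and the sample data are hypothetical, not taken from the paper's protocol.

```python
def overconfidence(answers, truths, confidences):
    """Overconfidence = mean stated confidence - actual accuracy.

    `confidences` holds the model's stated probability (0-1) that each
    answer is correct; a positive result means the model is overconfident.
    """
    assert len(answers) == len(truths) == len(confidences) > 0
    accuracy = sum(a == t for a, t in zip(answers, truths)) / len(answers)
    mean_conf = sum(confidences) / len(confidences)
    return mean_conf - accuracy

# Hypothetical example: 2 of 4 answers correct (accuracy 0.5),
# mean stated confidence 0.9, so overconfidence is about 0.4.
score = overconfidence(["A", "B", "C", "D"],
                       ["A", "B", "X", "Y"],
                       [0.95, 0.9, 0.9, 0.85])
print(round(score, 2))
```

On this reading, the paper's 20%-60% finding corresponds to this gap ranging from roughly 0.2 to 0.6 across the five models studied.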
Similar Papers
Do Language Models Mirror Human Confidence? Exploring Psychological Insights to Address Overconfidence in LLMs
Artificial Intelligence
Helps AI know how sure it is.
Large Language Models are Near-Optimal Decision-Makers with a Non-Human Learning Behavior
Artificial Intelligence
AI makes better choices than people in tests.
Large Language Newsvendor: Decision Biases and Cognitive Mechanisms
Artificial Intelligence
AI makes bad choices, like humans, but worse.