On the Variational Costs of Changing Our Minds
By: David Hyland, Mahault Albarracin
Potential Business Impact:
Explains why we stick to our beliefs, even when they are wrong.
The human mind is capable of extraordinary achievements, yet it often appears to work against itself. It actively defends its cherished beliefs even in the face of contradictory evidence, conveniently interprets information to conform to desired narratives, and selectively seeks out or avoids information to suit its various purposes. Although these behaviours deviate from common normative standards for belief updating, we argue that such 'biases' are not inherently cognitive flaws, but rather adaptive responses to the significant pragmatic and cognitive costs associated with revising one's beliefs. This paper introduces a formal framework for modelling the influence of these costs on our belief updating mechanisms. We treat belief updating as a motivated variational decision, in which agents weigh the perceived 'utility' of a belief against the informational cost required to adopt a new belief state, quantified by the Kullback-Leibler divergence from the prior to the variational posterior. We perform computational experiments to demonstrate that simple instantiations of this resource-rational model qualitatively reproduce commonplace human behaviours, including confirmation bias and attitude polarisation. In doing so, we suggest that this framework takes a step toward a more holistic account of the motivated Bayesian mechanics of belief change and provides practical insights for predicting, compensating for, and correcting deviations from desired belief updating processes.
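To make the trade-off in the abstract concrete, below is a minimal sketch of one way such a motivated variational update could be instantiated. It assumes a discrete space of belief states and maximises expected log-evidence plus a utility term, minus a KL-divergence penalty for moving away from the prior. The function name `motivated_update` and the `cost_weight` parameter are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def motivated_update(prior, log_likelihood, utility, cost_weight=1.0):
    """Variational posterior maximising
        E_q[log p(o|s) + U(s)] - cost_weight * KL(q || prior).
    For a discrete state space, the maximiser has the closed form
        q(s) proportional to prior(s) * exp((log p(o|s) + U(s)) / cost_weight).
    """
    logits = np.log(prior) + (log_likelihood + utility) / cost_weight
    q = np.exp(logits - logits.max())  # subtract the max for numerical stability
    return q / q.sum()

# Two hypotheses; the agent starts out favouring hypothesis 0.
prior = np.array([0.7, 0.3])
# New evidence favours hypothesis 1.
log_likelihood = np.log(np.array([0.2, 0.8]))
# The agent *wants* hypothesis 0 to be true.
utility = np.array([1.0, 0.0])

for c in (1.0, 5.0):
    q = motivated_update(prior, log_likelihood, utility, cost_weight=c)
    print(f"cost_weight={c}: posterior={np.round(q, 3)}")
```

Under these assumptions, raising the cost weight shrinks the update toward the prior even though the evidence points the other way, which is the qualitative confirmation-bias pattern the abstract describes; the utility term additionally tilts the posterior toward the desired hypothesis.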
Similar Papers
How Do People Revise Inconsistent Beliefs? Examining Belief Revision in Humans with User Studies
Artificial Intelligence
Helps computers learn how people change their minds.
Bias or Optimality? Disentangling Bayesian Inference and Learning Biases in Human Decision-Making
Artificial Intelligence
Shows how brains learn from choices, not just bias.
How Overconfidence in Initial Choices and Underconfidence Under Criticism Modulate Change of Mind in Large Language Models
Machine Learning (CS)
Shows why AI sticks to its first answer.