Differences in the Moral Foundations of Large Language Models
By: Peter Kirgis
Potential Business Impact:
Models rely on different moral foundations than people do, and the gap widens as model capabilities increase.
Large language models are increasingly being used in critical domains of politics, business, and education, but the nature of their normative ethical judgment remains opaque. Alignment research has, to date, not sufficiently utilized perspectives and insights from the field of moral psychology to inform the training and evaluation of frontier models. I perform a synthetic experiment on a wide range of models from most major model providers, using Jonathan Haidt's influential Moral Foundations Theory (MFT) to elicit diverse value judgments from LLMs. Using multiple descriptive statistical approaches, I document the bias and variance of large language model responses relative to the human baseline from the original survey. My results suggest that models rely on different moral foundations from one another and from a nationally representative human baseline, and that these differences increase as model capabilities increase. This work seeks to spur further analysis of LLMs using MFT, including the fine-tuning of open-source models, and greater deliberation by policymakers on the importance of moral foundations for LLM alignment.
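As a rough illustration of the kind of descriptive comparison the abstract describes, the sketch below scores a set of LLM questionnaire runs and a human baseline on the five classic MFT foundations, then reports each foundation's bias (model mean minus human mean) and variance across runs. The file names, column layout, and 0-5 Likert scoring are assumptions made for the example, not the paper's actual pipeline.

```python
"""
Hypothetical sketch: comparing per-foundation Moral Foundations
Questionnaire scores of an LLM against a human baseline.
File names, column names, and the scoring scheme are assumptions.
"""
import pandas as pd

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]


def foundation_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Average the Likert items belonging to each foundation.

    Assumes one row per completed questionnaire and item columns
    named like 'care_1', 'care_2', ..., 'sanctity_6'.
    """
    scores = {}
    for f in FOUNDATIONS:
        items = [c for c in df.columns if c.startswith(f + "_")]
        scores[f] = df[items].mean(axis=1)
    return pd.DataFrame(scores)


def bias_and_variance(model_df: pd.DataFrame, human_df: pd.DataFrame) -> pd.DataFrame:
    """Per-foundation bias (model mean minus human mean) and the
    variance of the model's scores across repeated runs."""
    model_scores = foundation_scores(model_df)
    human_scores = foundation_scores(human_df)
    return pd.DataFrame({
        "bias": model_scores.mean() - human_scores.mean(),
        "variance": model_scores.var(ddof=1),
    })


if __name__ == "__main__":
    # Hypothetical inputs: one CSV of repeated LLM questionnaire runs,
    # one CSV of the human survey responses.
    llm = pd.read_csv("llm_mfq_responses.csv")
    humans = pd.read_csv("human_mfq_baseline.csv")
    print(bias_and_variance(llm, humans))
```

Reporting bias and variance separately keeps the two failure modes distinct: a model can match the human means on average yet be highly unstable across runs, or be very consistent while systematically favoring different foundations.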
Similar Papers
Investigating Political and Demographic Associations in Large Language Models Through Moral Foundations Theory
Computation and Language
Examines whether LLMs exhibit political and demographic associations through the lens of Moral Foundations Theory.
Exploring Cultural Variations in Moral Judgments with Large Language Models
Computation and Language
Explores how LLMs reflect cultural variation in moral judgments.
From Stability to Inconsistency: A Study of Moral Preferences in LLMs
Computers and Society
Studies the stability and consistency of moral preferences expressed by LLMs.