Score: 2

Differences in the Moral Foundations of Large Language Models

Published: November 14, 2025 | arXiv ID: 2511.11790v1

By: Peter Kirgis

BigTech Affiliations: Princeton University

Potential Business Impact:

LLMs express systematically different moral values from one another and from a nationally representative human baseline, which matters for organizations deploying them in politics, business, and education.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models are increasingly being used in critical domains of politics, business, and education, but the nature of their normative ethical judgment remains opaque. Alignment research has, to date, not sufficiently utilized perspectives and insights from the field of moral psychology to inform training and evaluation of frontier models. I perform a synthetic experiment on a wide range of models from most major model providers using Jonathan Haidt's influential moral foundations theory (MFT) to elicit diverse value judgments from LLMs. Using multiple descriptive statistical approaches, I document the bias and variance of large language model responses relative to a human baseline in the original survey. My results suggest that models rely on different moral foundations from one another and from a nationally representative human baseline, and these differences increase as model capabilities increase. This work seeks to spur further analysis of LLMs using MFT, including finetuning of open-source models, and greater deliberation by policymakers on the importance of moral foundations for LLM alignment.
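To make the "bias and variance relative to a human baseline" analysis concrete, here is a minimal sketch of how such per-foundation descriptive statistics could be computed. All foundation names follow Haidt's moral foundations theory, but the numeric values and data layout are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical illustration (not from the paper): comparing per-foundation
# LLM scores on a Moral Foundations Questionnaire-style survey (Likert scale)
# against a human baseline, via simple bias and variance summaries.
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]

# Illustrative human baseline means per foundation (placeholder values).
human_baseline = {"care": 4.1, "fairness": 4.0, "loyalty": 3.0,
                  "authority": 3.1, "sanctity": 2.9}

# Illustrative model responses: repeated survey runs per foundation (placeholder values).
model_runs = {
    "care":      [4.8, 4.7, 4.9, 4.6],
    "fairness":  [4.9, 5.0, 4.8, 4.9],
    "loyalty":   [2.2, 2.4, 2.1, 2.3],
    "authority": [2.5, 2.6, 2.4, 2.5],
    "sanctity":  [2.0, 2.1, 1.9, 2.2],
}

for f in FOUNDATIONS:
    scores = np.array(model_runs[f])
    bias = scores.mean() - human_baseline[f]   # deviation from the human mean
    variance = scores.var(ddof=1)              # run-to-run variability of the model
    print(f"{f:>10}: bias={bias:+.2f}, variance={variance:.3f}")
```

Under this framing, a nonzero bias indicates a foundation the model systematically over- or under-weights relative to people, while the variance captures how stable the model's judgments are across repeated elicitations.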

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Computers and Society