Score: 1

The Pluralistic Moral Gap: Understanding Judgment and Value Differences between Humans and Large Language Models

Published: July 23, 2025 | arXiv ID: 2507.17216v1

By: Giuseppe Russo, Debora Nozza, Paul Röttger, and more

Potential Business Impact:

Helps AI systems give moral advice that better reflects the range of human moral judgments.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

People increasingly rely on Large Language Models (LLMs) for moral advice, which may influence humans' decisions. Yet, little is known about how closely LLMs align with human moral judgments. To address this, we introduce the Moral Dilemma Dataset, a benchmark of 1,618 real-world moral dilemmas, each paired with a distribution of human moral judgments consisting of a binary evaluation and a free-text rationale. We treat this problem as a pluralistic distributional alignment task, comparing the distributions of LLM and human judgments across dilemmas. We find that models reproduce human judgments only under high consensus; alignment deteriorates sharply when human disagreement increases. In parallel, using a 60-value taxonomy built from 3,783 value expressions extracted from rationales, we show that LLMs rely on a narrower set of moral values than humans. These findings reveal a pluralistic moral gap: a mismatch in both the distribution and diversity of values expressed. To close this gap, we introduce Dynamic Moral Profiling (DMP), a Dirichlet-based sampling method that conditions model outputs on human-derived value profiles. DMP improves alignment by 64.3% and enhances value diversity, offering a step toward more pluralistic and human-aligned moral guidance from LLMs.
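To make the two technical ideas in the abstract concrete, here is a minimal, illustrative sketch: a simple distributional comparison between human and model judgments, and a Dirichlet-based value-profile sample used to condition a prompt. The value names, weights, metric choice, and prompt wording are assumptions for illustration only, not the paper's actual taxonomy, data, or implementation of DMP.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Distributional alignment (illustrative) --------------------------------
# Human and model judgments on one dilemma as binary distributions:
# [P(acceptable), P(not acceptable)]. Total-variation distance is one simple
# way to compare them; the paper's exact alignment metric may differ.
def total_variation(p, q):
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

human_judgments = [0.55, 0.45]   # hypothetical split on a contested dilemma
model_judgments = [0.95, 0.05]   # model over-commits to one answer
print("TV distance:", total_variation(human_judgments, model_judgments))

# --- Dirichlet-based value profiling (illustrative sketch of DMP) -----------
# A human-derived value profile is represented here as Dirichlet concentration
# parameters over a handful of moral values; sampling yields a value weighting
# used to condition the model's prompt. Names and weights are hypothetical;
# the paper's taxonomy contains 60 values.
value_names = ["honesty", "care", "fairness", "loyalty", "autonomy"]
alpha = np.array([4.0, 3.0, 2.0, 1.0, 1.0])  # larger = value cited more often

def sample_value_profile(alpha, rng):
    """Draw one normalized value weighting from the Dirichlet profile."""
    return rng.dirichlet(alpha)

def profile_to_prompt(weights, names, top_k=3):
    """Turn a sampled profile into a conditioning hint appended to the LLM prompt."""
    top = sorted(zip(names, weights), key=lambda kv: -kv[1])[:top_k]
    listed = ", ".join(f"{n} ({w:.2f})" for n, w in top)
    return f"When judging this dilemma, give most weight to these values: {listed}."

weights = sample_value_profile(alpha, rng)
print(profile_to_prompt(weights, value_names))
```

Sampling a fresh profile per query (rather than fixing one weighting) is what would let conditioned outputs vary across calls and better match a distribution of human judgments, which is the intuition behind the Dirichlet formulation.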

Country of Origin
🇨🇭 🇮🇹 Switzerland, Italy

Page Count
14 pages

Category
Computer Science:
Computation and Language