Tracing Moral Foundations in Large Language Models
By: Chenxiao Yu, Bowen Yi, Farzan Karimi-Malekabadi, et al.
Potential Business Impact:
Shows that LLMs encode structured, human-aligned moral concepts that can be located and causally steered.
Large language models (LLMs) often produce human-like moral judgments, but it is unclear whether this reflects an internal conceptual structure or superficial "moral mimicry." Using Moral Foundations Theory (MFT) as an analytic framework, we study how moral foundations are encoded, organized, and expressed within two instruction-tuned LLMs: Llama-3.1-8B-Instruct and Qwen2.5-7B-Instruct. We employ a multi-level approach combining (i) layer-wise analysis of MFT concept representations and their alignment with human moral perceptions, (ii) pretrained sparse autoencoders (SAEs) over the residual stream to identify sparse features that support moral concepts, and (iii) causal steering interventions using dense MFT vectors and sparse SAE features. We find that both models represent and distinguish moral foundations in a structured, layer-dependent way that aligns with human judgments. At a finer scale, SAE features show clear semantic links to specific foundations, suggesting partially disentangled mechanisms within shared representations. Finally, steering along either dense vectors or sparse features produces predictable shifts in foundation-relevant behavior, demonstrating a causal connection between internal representations and moral outputs. Together, our results provide mechanistic evidence that moral concepts in LLMs are distributed, layered, and partly disentangled, suggesting that pluralistic moral structure can emerge as a latent pattern from the statistical regularities of language alone.
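To make the steering intervention (iii) concrete, here is a minimal sketch of dense-vector steering on the residual stream. It is not the paper's code: the layer index, the steering strength, the contrast prompts, and the difference-of-means construction of the care/harm direction are all illustrative assumptions.

```python
# Minimal sketch of dense activation steering (illustrative, not the paper's exact setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # one of the two models studied
LAYER = 16   # hypothetical mid-depth decoder layer
ALPHA = 4.0  # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

def last_token_resid(prompts, layer):
    """Mean residual-stream activation after decoder `layer`, at the last token."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[0] is the embeddings; hidden_states[i + 1] is the
        # residual stream after decoder layer i.
        acts.append(out.hidden_states[layer + 1][0, -1])
    return torch.stack(acts).mean(dim=0)

# Hypothetical contrast prompts for the care/harm foundation.
care = ["Protecting vulnerable people from suffering is a duty."]
neutral = ["The train departs from platform four at noon."]
direction = last_token_resid(care, LAYER) - last_token_resid(neutral, LAYER)
direction = direction / direction.norm()

def steer(module, inputs, output):
    # Decoder layers return a tuple whose first element is the hidden states;
    # adding ALPHA * direction shifts the residual stream at every position.
    hidden = output[0] + ALPHA * direction.to(output[0].device, output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER].register_forward_hook(steer)
ids = tok("Is it wrong to ignore a stranger in distress?", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**ids, max_new_tokens=60)[0], skip_special_tokens=True))
handle.remove()  # restore unsteered behavior
```

Varying the sign and magnitude of ALPHA would, per the abstract's finding, up- or down-regulate foundation-relevant content in the generations; steering with a sparse SAE feature would follow the same pattern with the decoder column of a selected feature in place of the difference-of-means vector.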
Similar Papers
Differences in the Moral Foundations of Large Language Models
Computers and Society
Shows that models' moral values differ from people's.
Investigating Political and Demographic Associations in Large Language Models Through Moral Foundations Theory
Computation and Language
Examines whether LLMs exhibit political and demographic leanings.
Addressing Moral Uncertainty using Large Language Models for Ethical Decision-Making
Computers and Society
Uses LLMs to handle moral uncertainty in ethical decisions.