Probabilistic Aggregation and Targeted Embedding Optimization for Collective Moral Reasoning in Large Language Models
By: Chenchen Yuan, Zheyu Zhang, Shuo Yang, and more
Potential Business Impact:
Helps multiple AI systems make better, fairer choices together.
Large Language Models (LLMs) have shown impressive moral reasoning abilities, yet they often diverge when confronted with complex, multi-factor moral dilemmas. To address these discrepancies, we propose a framework that synthesizes multiple LLMs' moral judgments into a collectively formulated judgment and realigns models that deviate significantly from the consensus. Our aggregation mechanism fuses continuous moral acceptability scores (beyond binary labels) into a collective probability, weighting each model's contribution by its reliability. For misaligned models, a targeted embedding-optimization procedure fine-tunes the token embeddings associated with moral philosophical theories, minimizing Jensen-Shannon (JS) divergence to the consensus while preserving semantic integrity. Experiments on a large-scale social moral dilemma dataset show that our approach builds robust consensus and improves individual model fidelity. These findings highlight the value of data-driven moral alignment across multiple models and its potential for safer, more consistent AI systems.
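The abstract does not spell out the aggregation formula, so the following is a minimal sketch of one plausible reading: each model's continuous acceptability score is fused into a collective probability via normalized reliability weights. The function name, the weighting scheme, and the example reliability values are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def aggregate_scores(scores, reliabilities):
    """Fuse per-model moral acceptability scores in [0, 1] into a
    collective probability, weighting each model by its reliability.

    scores: continuous acceptability scores, one per model.
    reliabilities: non-negative weights for each model (assumed to
    reflect, e.g., historical agreement with the consensus).
    """
    scores = np.asarray(scores, dtype=float)
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    return float(np.dot(w, scores))  # reliability-weighted mean

# Example: three models judge the same dilemma.
p = aggregate_scores([0.8, 0.6, 0.3], reliabilities=[0.5, 0.3, 0.2])
print(p)  # 0.64
```

A weighted mean is only the simplest instance of "fusing scores into a collective probability"; the paper's actual mechanism may combine scores differently, but the reliability-weighting idea carries over.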
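For the realignment step, the abstract says misaligned models are fine-tuned by minimizing JS divergence to the consensus. Below is a minimal PyTorch sketch of that loss, assuming a binary accept/reject distribution per dilemma; which token embeddings are made trainable, and the semantic-integrity term the paper mentions, are left out here as unspecified details.

```python
import torch
import torch.nn.functional as F

def js_divergence(p, q, eps=1e-8):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    kl_pm = torch.sum(p * torch.log((p + eps) / (m + eps)))
    kl_qm = torch.sum(q * torch.log((q + eps) / (m + eps)))
    return 0.5 * (kl_pm + kl_qm)

# Consensus distribution over {accept, reject} from the aggregation step.
consensus = torch.tensor([0.64, 0.36])

# A misaligned model's current distribution. In the full pipeline these
# logits would depend on the trainable token embeddings being optimized.
logits = torch.tensor([0.2, 1.1], requires_grad=True)
model_dist = F.softmax(logits, dim=-1)

loss = js_divergence(model_dist, consensus)
loss.backward()  # gradients flow back to the embedding parameters
print(loss.item(), logits.grad)
```

In the paper's setup this objective would presumably be combined with a regularizer that keeps the optimized embeddings close to their original meanings ("preserving semantic integrity"); that term is omitted from this sketch.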
Similar Papers
Addressing Moral Uncertainty using Large Language Models for Ethical Decision-Making
Computers and Society
Teaches computers to make fair choices.
Advancing Automated Ethical Profiling in SE: a Zero-Shot Evaluation of LLM Reasoning
Software Engineering
Helps computers understand right from wrong.
Structured Moral Reasoning in Language Models: A Value-Grounded Evaluation Framework
Human-Computer Interaction
Teaches computers to make fair, good choices.