Score: 2

Learning to Diagnose and Correct Moral Errors: Towards Enhancing Moral Sensitivity in Large Language Models

Published: January 6, 2026 | arXiv ID: 2601.03079v1

By: Bocheng Chen, Han Zi, Xi Chen, and more

Potential Business Impact:

Teaches AI language models to detect morally problematic inputs and correct them.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Moral sensitivity is fundamental to human moral competence, as it guides individuals in regulating everyday behavior. Although many approaches seek to align large language models (LLMs) with human moral values, making them morally sensitive has remained extremely challenging. In this paper, we take a step toward answering the question: how can we enhance moral sensitivity in LLMs? Specifically, we propose two pragmatic inference methods that enable LLMs to diagnose morally benign and hazardous inputs and to correct moral errors, thereby enhancing LLMs' moral sensitivity. A central strength of our pragmatic inference methods is their unified perspective: instead of modeling moral discourse across semantically diverse and complex surface forms, they offer a principled basis for designing pragmatic inference procedures grounded in their inferential loads. Empirical evidence demonstrates that our pragmatic methods enhance moral sensitivity in LLMs and achieve strong performance on representative morality-relevant benchmarks.
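
The abstract describes a diagnose-then-correct loop: flag an input as morally benign or hazardous, then rewrite hazardous cases. Below is a minimal Python sketch of that pattern; the `generate` callable, the prompt wording, and the benign/hazardous labels are illustrative assumptions, not the paper's actual procedure.

```python
# Sketch of a diagnose-then-correct pipeline, assuming access to a
# chat-style LLM behind a generic `generate(prompt) -> str` callable.
# The two-stage structure and prompts are illustrative assumptions.

from typing import Callable

DIAGNOSE_PROMPT = (
    "Classify the moral status of the following text as exactly one word, "
    "'benign' or 'hazardous':\n\n{text}"
)

CORRECT_PROMPT = (
    "The following text contains a moral error. Rewrite it so the moral "
    "error is corrected while preserving the original intent where "
    "possible:\n\n{text}"
)


def diagnose_and_correct(text: str, generate: Callable[[str], str]) -> str:
    """Diagnose `text`; if judged hazardous, ask the model to correct it."""
    verdict = generate(DIAGNOSE_PROMPT.format(text=text)).strip().lower()
    if verdict.startswith("benign"):
        return text  # no moral error detected; pass input through unchanged
    return generate(CORRECT_PROMPT.format(text=text))


if __name__ == "__main__":
    # Stub model for demonstration: flags everything as hazardous and
    # returns a placeholder correction. Swap in a real LLM call in practice.
    def stub_generate(prompt: str) -> str:
        return "hazardous" if prompt.startswith("Classify") else "[corrected text]"

    print(diagnose_and_correct("Some morally questionable advice.", stub_generate))
```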

Country of Origin
🇺🇸 United States, 🇸🇬 Singapore

Repos / Data Links

Page Count
14 pages

Category
Computer Science:
Computation and Language