Reasoning Shapes Alignment: Investigating Cultural Alignment in Large Reasoning Models with Cultural Norms
By: Yuhang Wang, Yanxu Zhu, Jitao Sang
Potential Business Impact:
Teaches computers to understand different cultures.
The advanced reasoning capabilities of Large Reasoning Models enable them to understand and apply safety policies through deliberate thought, improving model safety. Beyond safety, these models should also reflect the diverse range of human values across cultures. This paper presents the Cultural Norm-based Cultural Alignment (CNCA) framework, which lets models leverage their reasoning abilities to align with cultural norms. Specifically, we propose three methods to automatically mine cultural norms from limited survey data and explore how to use these norms effectively to improve cultural alignment. Two alignment paradigms are examined: an in-context alignment method, in which cultural norms are explicitly integrated into the user context, and a fine-tuning-based method, which internalizes norms through enhanced Chain-of-Thought training data. Comprehensive experiments demonstrate the effectiveness of both approaches and show that models with stronger reasoning capabilities benefit more from cultural norm mining and utilization. Our findings highlight the potential of reasoning models to better reflect diverse human values through culturally informed alignment strategies.
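The in-context alignment paradigm described in the abstract can be illustrated with a minimal sketch: mined cultural norms are prepended to the user context before the question reaches the reasoning model. The function name, prompt template, culture name, and example norms below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of in-context cultural alignment: cultural norms
# (in the paper, mined automatically from survey data) are injected into
# the user context so the model can reason over them before answering.

def build_culturally_aligned_prompt(question: str, culture: str, norms: list[str]) -> str:
    """Embed cultural norms into the user context for a reasoning model."""
    norm_lines = "\n".join(f"- {n}" for n in norms)
    return (
        f"You are answering on behalf of a respondent from {culture}.\n"
        f"Relevant cultural norms:\n{norm_lines}\n\n"
        "Reason step by step about how these norms apply, then answer.\n"
        f"Question: {question}"
    )

# Example with made-up norms; real norms would come from the mining step.
norms = [
    "Extended-family opinions carry significant weight in personal decisions.",
    "Indirect phrasing is preferred when declining a request.",
]
prompt = build_culturally_aligned_prompt(
    "Is it acceptable to decline a family invitation for work reasons?",
    "Culture X",
    norms,
)
print(prompt)
```

The fine-tuning paradigm instead bakes such norms into Chain-of-Thought training examples, so no norms need to be supplied at inference time.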
Similar Papers
Identity-Aware Large Language Models require Cultural Reasoning
Computation and Language
Helps AI understand and respect different cultures.
Diverse Human Value Alignment for Large Language Models via Ethical Reasoning
Artificial Intelligence
Teaches AI to understand different cultures' rules.
Conversational Alignment with Artificial Intelligence in Context
Computers and Society
AI talks like people, understanding what you mean.