Rethinking How AI Embeds and Adapts to Human Values: Challenges and Opportunities
By: Sz-Ting Tzeng, Frank Dignum
Potential Business Impact:
AI learns to change its mind as our values change.
The concepts of "human-centered AI" and "value-based decision" have gained significant attention in both research and industry, yet many critical aspects remain underexplored. In particular, there is a need to understand how systems incorporate human values, how humans can identify these values within systems, and how to minimize the risks of harm or unintended consequences. In this paper, we highlight the need to rethink how we frame value alignment and assert that it should move beyond static and singular conceptions of values. We argue that AI systems should implement long-term reasoning and remain adaptable to evolving values. Furthermore, value alignment requires richer theories that address the full spectrum of human values. Since values often vary among individuals and groups, multi-agent systems provide the right framework for navigating pluralism, conflict, and inter-agent reasoning about values. We identify the challenges associated with value alignment and indicate directions for advancing value alignment research. In addition, we broadly discuss diverse perspectives on value alignment, from design methodologies to practical applications.
Similar Papers
Understanding the Process of Human-AI Value Alignment
Computers and Society
Helps AI understand and follow human values.
Diverse Human Value Alignment for Large Language Models via Ethical Reasoning
Artificial Intelligence
Teaches AI to understand different cultures' rules.
Ethics2vec: aligning automatic agents and human preferences
Artificial Intelligence
Teaches AI to understand and follow human values.