AI Alignment vs. AI Ethical Treatment: 10 Challenges
By: Adam Bradley, Bradford Saad
Potential Business Impact:
Making AI both safe for humanity and ethically treated is hard.
A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity, and mistreating AI systems that merit moral consideration in their own right. This paper argues that these two dangers interact and that, if we create AI systems that merit moral consideration, simultaneously avoiding both dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications for AI development. Although the most obvious way to avoid the tension between alignment and ethical treatment would be to avoid creating AI systems that merit moral consideration, this option may be unrealistic and is perhaps only temporarily available. So, we conclude by offering some suggestions for other ways of mitigating the mistreatment risks associated with alignment.
Similar Papers
Legal Alignment for Safe and Ethical AI
Computers and Society
Uses laws to make AI safe and fair.
Towards Integrated Alignment
Computers and Society
Makes AI understand and follow human wishes.
Ethics through the Facets of Artificial Intelligence
Computers and Society
Clears up confusion about AI for fairer use.