Misalignment or misuse? The AGI alignment tradeoff
By: Max Hellrigel-Holderbaum, Leonard Dung
Potential Business Impact:
Makes smart AI safer from bad people.
Creating systems that are aligned with our goals is widely seen, both at leading AI companies and in the academic field of AI safety, as a leading approach to building safe and beneficial AI. We defend the view that misaligned AGI, that is, future generally intelligent (robotic) AI agents, poses catastrophic risks. At the same time, we support the view that aligned AGI creates a substantial risk of catastrophic misuse by humans. While both risks are severe and stand in tension with one another, we show that, in principle, there is room for alignment approaches that do not increase misuse risk. We then investigate how the tradeoff between misalignment and misuse plays out empirically for different technical approaches to AI alignment. Here, we argue that many current alignment techniques, and foreseeable improvements of them, plausibly increase the risk of catastrophic misuse. Since the impacts of AI depend on the social context in which it is deployed, we close by discussing important social factors and suggest that, to reduce the risk of a misuse catastrophe arising from aligned AGI, techniques such as robustness and AI control methods, and especially good governance, seem essential.
Similar Papers
An Approach to Technical AGI Safety and Security
Artificial Intelligence
Keeps powerful AI from being used for harm.
Position Paper: Bounded Alignment: What (Not) To Expect From AGI Agents
Artificial Intelligence
Makes AI safer by studying animal brains.
Neurodivergent Influenceability as a Contingent Solution to the AI Alignment Problem
Artificial Intelligence
Makes AI work with humans, not against them.