An Approach to Technical AGI Safety and Security
By: Rohin Shah, Alex Irpan, Alexander Matt Turner, and more
Potential Business Impact:
Keeps powerful AI from being used for harm.
Artificial General Intelligence (AGI) promises transformative benefits but also presents significant risks. We develop an approach to address the risk of harms severe enough to significantly damage humanity. We identify four areas of risk: misuse, misalignment, mistakes, and structural risks. Of these, we focus on technical approaches to misuse and misalignment. For misuse, our strategy aims to prevent threat actors from accessing dangerous capabilities by proactively identifying those capabilities and implementing robust security, access restrictions, monitoring, and model safety mitigations. To address misalignment, we outline two lines of defense. First, model-level mitigations such as amplified oversight and robust training can help to build an aligned model. Second, system-level security measures such as monitoring and access control can mitigate harm even if the model is misaligned. Techniques from interpretability, uncertainty estimation, and safer design patterns can enhance the effectiveness of these mitigations. Finally, we briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
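To make the layered structure concrete, the sketch below is a minimal, hypothetical illustration (not taken from the paper) of the second line of defense: a serving layer that applies access control and output monitoring around a model, so that harm can be limited even if the model itself is misaligned. All function and variable names (is_authorized, flags_dangerous_content, generate, serve) are illustrative assumptions, not APIs described by the authors.

# Illustrative sketch only: a system-level "second line of defense" that wraps a
# model behind access control and output monitoring, independent of whether the
# underlying model is aligned. Names here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    prompt: str

def is_authorized(user_id: str) -> bool:
    """Access restriction: only vetted users may query dangerous capabilities."""
    return user_id in {"vetted-researcher-001"}  # placeholder allowlist

def flags_dangerous_content(text: str) -> bool:
    """Monitoring: a separate, cheaper check screens model outputs before release."""
    return any(term in text.lower() for term in ("synthesis route", "exploit code"))

def generate(prompt: str) -> str:
    """Stand-in for the underlying model (model-level mitigations live here)."""
    return f"[model response to: {prompt}]"

def serve(request: Request) -> str:
    if not is_authorized(request.user_id):
        return "Request denied: insufficient access."      # access control
    response = generate(request.prompt)
    if flags_dangerous_content(response):
        return "Response withheld pending human review."   # monitoring
    return response

if __name__ == "__main__":
    print(serve(Request(user_id="unknown", prompt="How do I build X?")))

In this toy setup the monitoring and access-control checks sit outside the model, which is the point of the system-level defense: they do not depend on the model behaving as intended.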
Similar Papers
Misalignment or misuse? The AGI alignment tradeoff
Computers and Society
Keeps smart AI safe from bad people.
Position Paper: Bounded Alignment: What (Not) To Expect From AGI Agents
Artificial Intelligence
Makes AI safer by studying animal brains.
Limitations on Safe, Trusted, Artificial General Intelligence
Machine Learning (CS)
Safe AI can't be as smart as humans.