Position Paper: Bounded Alignment: What (Not) To Expect From AGI Agents
By: Ali A. Minai
Potential Business Impact:
Makes AI safer by grounding safety expectations in how animal and human intelligence actually works.
The issues of AI risk and AI safety are becoming critical as the prospect of artificial general intelligence (AGI) looms larger. The emergence of extremely large and capable generative models has led to alarming predictions and created a stir from boardrooms to legislatures. As a result, AI alignment has emerged as one of the most important areas in AI research. The goal of this position paper is to argue that the currently dominant vision of AGI in the AI and machine learning (AI/ML) community needs to evolve, and that expectations and metrics for its safety must be informed much more by our understanding of the only existing instance of general intelligence, i.e., the intelligence found in animals and especially in humans. This change in perspective will lead to a more realistic view of the technology and allow for better policy decisions.
Similar Papers
Misalignment or misuse? The AGI alignment tradeoff
Computers and Society
Weighs keeping smart AI from going wrong against keeping it out of the hands of bad actors.
An Approach to Technical AGI Safety and Security
Artificial Intelligence
Keeps powerful AI from being used for harm.
Disentangling AI Alignment: A Structured Taxonomy Beyond Safety and Ethics
Computers and Society
Sorts out the different goals of AI alignment beyond just safety and ethics.