Limitations on Safe, Trusted, Artificial General Intelligence
By: Rina Panigrahy, Vatsal Sharan
Potential Business Impact:
Provably safe, trusted AI can't match humans on every task.
Safety, trust, and Artificial General Intelligence (AGI) are aspirational goals in artificial intelligence (AI) systems, and there are several informal interpretations of these notions. In this paper, we propose strict, mathematical definitions of safety, trust, and AGI, and demonstrate a fundamental incompatibility between them. We define safety of a system as the property that it never makes any false claims, trust as the assumption that the system is safe, and AGI as the property of an AI system always matching or exceeding human capability. Our core finding is that -- for our formal definitions of these notions -- a safe and trusted AI system cannot be an AGI system: for such a safe, trusted system there are task instances which are easily and provably solvable by a human but not by the system. We note that we consider strict mathematical definitions of safety and trust, and it is possible for real-world deployments to instead rely on alternate, practical interpretations of these notions. We show our results for program verification, planning, and graph reachability. Our proofs draw parallels to Gödel's incompleteness theorems and Turing's proof of the undecidability of the halting problem, and can be regarded as interpretations of Gödel's and Turing's results.
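The intuition behind the incompatibility can be illustrated with a Turing-style diagonalization against a hypothetical program verifier; the sketch below is an illustration under our own assumptions, not the paper's construction, and the names safe_verifier, diagonal, and DIAGONAL_SOURCE are hypothetical.

```python
# A minimal sketch of the Turing-style diagonalization behind the result,
# not the paper's construction. All names here are illustrative assumptions.

def safe_verifier(program_source: str) -> str:
    # A "safe" verifier may answer "halts", "loops", or "unknown";
    # safety means any definite answer it gives must be true. On the
    # diagonal program below, the only answer compatible with safety
    # is "unknown", so that is what this stand-in returns.
    return "unknown"

# Stand-in for the source text of diagonal() (a real construction would
# quine it); the stand-in verifier above ignores its argument anyway.
DIAGONAL_SOURCE = "def diagonal(): ..."

def diagonal():
    # Ask the verifier about this very program and contradict any
    # definite claim it makes, so no definite claim about it can be
    # both made and true.
    answer = safe_verifier(DIAGONAL_SOURCE)
    if answer == "halts":
        while True:      # falsify a "halts" claim by looping forever
            pass
    return               # falsify a "loops" claim by halting

diagonal()
```

A human who reads diagonal() together with the verifier's forced "unknown" answer can correctly conclude that diagonal() halts, mirroring the abstract's claim that some task instances are easily and provably solvable by a human but cannot be certified by the safe, trusted system itself.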
Similar Papers
A Framework for Inherently Safer AGI through Language-Mediated Active Inference
Artificial Intelligence
Makes smart computers safer by design.
An Approach to Technical AGI Safety and Security
Artificial Intelligence
Keeps powerful AI from being used for bad.
Governable AI: Provable Safety Under Extreme Threat Models
Artificial Intelligence
Keeps super-smart AI from causing disasters.