AI Survival Stories: a Taxonomic Analysis of AI Existential Risk
By: Herman Cappelen, Simon Goldstein, John Hawthorne
Since the release of ChatGPT, there has been considerable debate about whether AI systems pose an existential risk to humanity. This paper develops a general framework for thinking about the existential risk of AI systems. We analyze a two-premise argument that AI systems pose a threat to humanity. Premise one: AI systems will become extremely powerful. Premise two: if AI systems become extremely powerful, they will destroy humanity. We use these two premises to construct a taxonomy of survival stories, in which humanity survives into the far future. In each survival story, one of the two premises fails. Either scientific barriers prevent AI systems from becoming extremely powerful; or humanity bans research into AI systems, thereby preventing them from becoming extremely powerful; or extremely powerful AI systems do not destroy humanity, because their goals prevent them from doing so; or extremely powerful AI systems do not destroy humanity, because we can reliably detect and disable systems that have the goal of doing so. We argue that different survival stories face different challenges, and that they motivate different responses to the threats from AI. Finally, we use our taxonomy to produce rough estimates of P(doom), the probability that humanity will be destroyed by AI.
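One way to read the two-premise structure as a probability decomposition (a minimal sketch of how taxonomy-based estimates of P(doom) could be assembled; the labels and the assumption that the survival stories are exhaustive and mutually exclusive are ours, not the authors'):

\[
P(\mathrm{doom}) \;=\; P(P_1)\cdot P(P_2 \mid P_1),
\]
where \(P_1\) is "AI systems become extremely powerful" and \(P_2\) is "if extremely powerful, they destroy humanity." Survival then decomposes across the four stories as
\[
P(\mathrm{survival}) \;=\; \underbrace{P(\neg P_1)}_{\text{scientific barriers or research ban}} \;+\; P(P_1)\cdot \underbrace{P(\neg P_2 \mid P_1)}_{\text{safe goals or detect-and-disable}}.
\]

On this reading, assigning rough credences to each disjunct and summing yields an estimate of P(survival), and hence of P(doom) as its complement.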