AI Consciousness and Existential Risk
By: Rufin VanRullen
In AI, existential risk denotes the hypothetical threat posed by an artificial system that would possess both the capability and the objective, whether directly or indirectly, to eradicate humanity. This issue is gaining prominence in scientific debate due to recent technical advances and increased media coverage. In parallel, AI progress has sparked speculation and studies about the potential emergence of artificial consciousness. The two questions, AI consciousness and existential risk, are sometimes conflated, as if the former entailed the latter. Here, I explain that this view stems from a common confusion between consciousness and intelligence. Yet these two properties are empirically and theoretically distinct. Arguably, while intelligence is a direct predictor of an AI system's existential threat, consciousness is not. There are, however, certain incidental scenarios in which consciousness could influence existential risk, in either direction. Consciousness could be viewed as a means towards AI alignment, thereby lowering existential risk; or it could be a precondition for reaching certain capabilities or levels of intelligence, and thus be positively related to existential risk. Recognizing these distinctions can help AI safety researchers and public policymakers focus on the most pressing issues.