Why They Disagree: Decoding Differences in Opinions about AI Risk on the Lex Fridman Podcast
By: Nghi Truong, Phanish Puranam, Özgecan Koçak
Potential Business Impact:
Helps explain why people disagree about AI dangers.
The emergence of transformative technologies often surfaces deep societal divisions, nowhere more evident than in contemporary debates about artificial intelligence (AI). A striking feature of these divisions is that they persist despite shared interests in ensuring that AI benefits humanity and avoiding catastrophic outcomes. This paper analyzes contemporary debates about AI risk, parsing the differences between the "doomer" and "boomer" perspectives into definitional, factual, causal, and moral premises to identify key points of contention. We find that differences in perspectives about existential risk ("X-risk") arise fundamentally from differences in causal premises about design vs. emergence in complex systems, while differences in perspectives about employment risks ("E-risks") pertain to different causal premises about the applicability of past theories (evolution) vs. their inapplicability (revolution). Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreements on moral values, and both can be described in terms of differing views on the extent of boundedness of human rationality. Our approach to analyzing reasoning chains at scale, using an ensemble of LLMs to parse textual data, can be applied to identify key points of contention in debates about risk to the public in any arena.
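To illustrate the kind of analysis described here, below is a minimal, hypothetical sketch of how an ensemble of LLMs could label statements from a debate transcript as definitional, factual, causal, or moral premises, aggregated by majority vote across models. The model names, prompt wording, and the query_model stub are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: classify debate statements into premise types
# (definitional, factual, causal, moral) with an ensemble of LLMs and
# take a majority vote. query_model() is a stand-in for real API calls;
# the model names and prompt are illustrative assumptions only.
from collections import Counter

PREMISE_TYPES = ["definitional", "factual", "causal", "moral"]

PROMPT = (
    "Classify the following statement from a debate about AI risk as one of: "
    "definitional, factual, causal, moral. Answer with a single word.\n\n"
    "Statement: {statement}"
)

def query_model(model_name: str, prompt: str) -> str:
    """Stand-in for a call to a hosted LLM.
    Replace the body with a real completion request for model_name."""
    return "causal"  # placeholder response so the sketch runs end to end

def classify_statement(statement: str, models: list[str]) -> str:
    """Ask each model in the ensemble, then return the majority label."""
    votes = []
    for model in models:
        label = query_model(model, PROMPT.format(statement=statement)).strip().lower()
        if label in PREMISE_TYPES:
            votes.append(label)
    return Counter(votes).most_common(1)[0][0] if votes else "unclassified"

if __name__ == "__main__":
    ensemble = ["model-a", "model-b", "model-c"]  # placeholder model names
    statement = "If we scale current systems, dangerous capabilities will emerge unpredictably."
    print(classify_statement(statement, ensemble))
```

Aggregating labels across several models, rather than trusting a single one, is one simple way to reduce idiosyncratic misclassifications when parsing reasoning chains at scale.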
Similar Papers
The AI Risk Spectrum: From Dangerous Capabilities to Existential Threats
Computers and Society
Helps us understand dangers of smart computers.
From Catastrophic to Concrete: Reframing AI Risk Communication for Public Mobilization
Computers and Society
Focuses AI worries on jobs and kids, not doom.
Three Lenses on the AI Revolution: Risk, Transformation, Continuity
Computers and Society
AI changes jobs, but we can control its risks.