Some economics of artificial superintelligence
By: Henry A. Thompson
Potential Business Impact:
Super-smart AI might not destroy us.
Conventional wisdom holds that a misaligned artificial superintelligence (ASI) will destroy humanity. But the problem of constraining a powerful agent is not new. I apply the classic economic logic of interjurisdictional competition, encompassing interest, and trading on credit to the threat of misaligned ASI. Using a simple model, I show that an acquisitive ASI refrains from full predation under surprisingly weak conditions. When humans can flee to rivals, inter-ASI competition creates a market that tempers predation. When humanity is trapped under a monopolist ASI, the ASI's "encompassing interest" in humanity's output makes it a rational autocrat rather than a ravager. And when the ASI has no long-term stake, our ability to withhold future output incentivizes it to trade on credit rather than steal. Across these extensions, humanity's welfare progressively worsens. But each case suggests that catastrophe is not a foregone conclusion. The dismal science, ironically, offers an optimistic take on our superintelligent future.
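The "trading on credit" mechanism can be made concrete with the standard participation constraint from repeated games; this is an illustrative sketch under generic assumptions (payoffs $v_T$, $v_P$ and discount factor $\delta$ are hypothetical labels, not the paper's notation):

```latex
% Let $v_T$ denote the ASI's per-period payoff from trading with humanity,
% $v_P > v_T$ its one-shot payoff from full predation (after which humans
% withhold all future output), and $\delta \in (0,1)$ its discount factor.
% Trading on credit beats predation whenever the discounted stream of trade
% exceeds the one-time grab:
\[
  \frac{v_T}{1-\delta} \;\ge\; v_P
  \quad\Longleftrightarrow\quad
  \delta \;\ge\; 1 - \frac{v_T}{v_P}.
\]
```

On this logic, even an ASI with no intrinsic long-term stake trades rather than steals so long as it is patient enough relative to the predation premium $v_P / v_T$.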
Similar Papers
An International Agreement to Prevent the Premature Creation of Artificial Superintelligence
Computers and Society
Stops super-smart computers from being built too soon.
The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI
General Economics
Protects humans from dangerous super-smart AI.
Super Co-alignment of Human and AI for Sustainable Symbiotic Society
Artificial Intelligence
Makes super-smart AI learn good values with us.