Some economics of artificial super intelligence

Published: November 10, 2025 | arXiv ID: 2511.06613v1

By: Henry A. Thompson

Potential Business Impact:

Super-smart AI might not destroy us.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Conventional wisdom holds that a misaligned artificial superintelligence (ASI) will destroy humanity. But the problem of constraining a powerful agent is not new. I apply classic economic logic of interjurisdictional competition, all-encompassing interest, and trading on credit to the threat of misaligned ASI. Using a simple model, I show that an acquisitive ASI refrains from full predation under surprisingly weak conditions. When humans can flee to rivals, inter-ASI competition creates a market that tempers predation. When humans are trapped by a monopolist ASI, its "encompassing interest" in their output makes it a rational autocrat rather than a ravager. And when the ASI has no long-term stake, our ability to withhold future output incentivizes it to trade on credit rather than steal. Across these extensions, humanity's welfare progressively worsens, but each case suggests that catastrophe is not a foregone conclusion. The dismal science, ironically, offers an optimistic take on our superintelligent future.
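
As a rough illustration of the "encompassing interest" and "trading on credit" arguments, the logic can be written as two textbook conditions (the notation below is assumed for exposition, not taken from the paper):

    % Sketch 1 (assumed notation): a monopolist ASI with an encompassing interest
    % picks an extraction rate \tau on human output Y(\tau), with Y decreasing in \tau.
    \[
    \max_{\tau \in [0,1]} \; \tau\, Y(\tau)
    \;\;\Rightarrow\;\;
    Y(\tau^{*}) + \tau^{*}\, Y'(\tau^{*}) = 0 ,
    \]
    % e.g. Y(\tau) = 1 - \tau gives \tau^{*} = 1/2: partial extraction, not full predation.

    % Sketch 2 (assumed notation): trading on credit as a repeated-game condition,
    % with per-period trade surplus v, one-shot expropriation gain g, discount factor \delta.
    \[
    \frac{\delta}{1-\delta}\, v \;\ge\; g
    \quad\Longleftrightarrow\quad
    \delta \;\ge\; \frac{g}{g+v} ,
    \]
    % i.e. the ASI prefers to keep trading whenever the discounted value of future
    % exchange (which humans can withhold) outweighs the one-time gain from stealing.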

Country of Origin
🇺🇸 United States

Page Count
25 pages

Category
Economics:
General Economics