Superintelligence Strategy: Expert Version
By: Dan Hendrycks, Eric Schmidt, Alexandr Wang
Potential Business Impact:
Aims to keep AI competition from triggering conflict between great powers.
Rapid advances in AI are beginning to reshape national security. Destabilizing AI developments could rupture the balance of power and raise the odds of great-power conflict, while widespread proliferation of capable AI hackers and virologists would lower barriers for rogue actors to cause catastrophe. Superintelligence -- AI vastly better than humans at nearly all cognitive tasks -- is now anticipated by AI researchers. Just as nations once developed nuclear strategies to secure their survival, we now need a coherent superintelligence strategy to navigate a new period of transformative change. We introduce the concept of Mutual Assured AI Malfunction (MAIM): a deterrence regime resembling nuclear mutual assured destruction (MAD) in which any state's aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals. Given the relative ease of sabotaging a destabilizing AI project -- through interventions ranging from covert cyberattacks to potential kinetic strikes on datacenters -- MAIM already describes the strategic picture AI superpowers find themselves in. Alongside this, states can increase their competitiveness by bolstering their economies and militaries through AI, and they can pursue nonproliferation to keep weaponizable AI capabilities out of the hands of rogue actors. Taken together, the three-part framework of deterrence, nonproliferation, and competitiveness outlines a robust strategy for superintelligence in the years ahead.
Similar Papers
An International Agreement to Prevent the Premature Creation of Artificial Superintelligence
Computers and Society
Proposes an agreement to keep superintelligence from being built prematurely.
Governing Automated Strategic Intelligence
Artificial Intelligence
Examines how to govern AI systems that automate intelligence analysis.
Preparing for the Intelligence Explosion
Computers and Society
AI speeds up progress, creating big future choices.