Mechanisms to Verify International Agreements About AI Development
By: Aaron Scher, Lisa Thiergart
Potential Business Impact:
Checks if countries follow AI safety rules.
International agreements about AI development may be required to reduce catastrophic risks from advanced AI systems. However, agreements about such a high-stakes technology must be backed by verification mechanisms: processes or tools that give one party greater confidence that another is following the agreed-upon rules, typically by detecting violations. This report gives an overview of potential verification approaches for three example policy goals, aiming to demonstrate how countries could practically verify claims about each other's AI development and deployment. The focus is on international agreements and state-involved AI development, but these approaches could also be applied to domestic regulation of companies. While many of the ideal solutions for verification are not yet technologically feasible, we emphasize that increased access (e.g., physical inspections of data centers) can often substitute for these technical approaches. Therefore, we remain hopeful that significant political will could enable ambitious international coordination, with strong verification mechanisms, to reduce catastrophic AI risks.
Similar Papers
An International Agreement to Prevent the Premature Creation of Artificial Superintelligence
Computers and Society
Stops super-smart computers from being built too soon.
The Need for Verification in AI-Driven Scientific Discovery
Artificial Intelligence
AI helps scientists find and prove new ideas faster.
International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty
Computers and Society
Makes AI development safer by setting rules.