Catastrophic Liability: Managing Systemic Risks in Frontier AI Development
By: Aidan Kierans, Kaley Rittichier, Utku Sonsayar, and more
Potential Business Impact:
Makes AI development safer and holds developers accountable for harms.
As artificial intelligence systems grow more capable and autonomous, frontier AI development poses systemic risks that could affect society at massive scale. Current practices at many labs developing these systems lack sufficient transparency around safety measures, testing procedures, and governance structures. This opacity makes it difficult to verify safety claims or to assign liability when harm occurs. Drawing on liability frameworks from nuclear energy, aviation software, cybersecurity, and healthcare, we propose a comprehensive approach to safety documentation and accountability in frontier AI development.
Similar Papers
Reinsuring AI: Energy, Agriculture, Finance & Medicine as Precedents for Scalable Governance of Frontier Artificial Intelligence
Computers and Society
Keeps powerful AI safe and responsible.
AI, Digital Platforms, and the New Systemic Risk
Computers and Society
Helps laws better protect us from AI dangers.
Evaluating AI Companies' Frontier Safety Frameworks: Methodology and Results
Computers and Society
Helps AI companies build safer, more responsible systems.