The BIG Argument for AI Safety Cases

Published: March 12, 2025 | arXiv ID: 2503.11705v3

By: Ibrahim Habli, Richard Hawkins, Colin Paterson and others

Potential Business Impact:

Improves AI safety by providing a structured, whole-system approach to building safety cases for AI systems.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

We present our Balanced, Integrated and Grounded (BIG) argument for assuring the safety of AI systems. The BIG argument adopts a whole-system approach to constructing a safety case for AI systems of varying capability, autonomy and criticality. Firstly, it is balanced by addressing safety alongside other critical ethical issues such as privacy and equity, acknowledging complexities and trade-offs in the broader societal impact of AI. Secondly, it is integrated by bringing together the social, ethical and technical aspects of safety assurance in a way that is traceable and accountable. Thirdly, it is grounded in long-established safety norms and practices, such as being sensitive to context and maintaining risk proportionality. Whether the AI capability is narrow and constrained or general-purpose and powered by a frontier or foundational model, the BIG argument insists on a systematic treatment of safety. Further, it places a particular focus on the novel hazardous behaviours emerging from the advanced capabilities of frontier AI models and the open contexts in which they are rapidly being deployed. These complex issues are considered within a wider AI safety case, approaching assurance from both technical and sociotechnical perspectives. Examples illustrating the use of the BIG argument are provided throughout the paper.

Country of Origin
🇬🇧 United Kingdom

Page Count
31 pages

Category
Computer Science:
Computers and Society