The BIG Argument for AI Safety Cases
By: Ibrahim Habli, Richard Hawkins, Colin Paterson, and more
Potential Business Impact:
Gives organizations a structured, whole-system way to argue that an AI system is safe to deploy, from narrow applications to frontier models.
We present our Balanced, Integrated and Grounded (BIG) argument for assuring the safety of AI systems. The BIG argument adopts a whole-system approach to constructing a safety case for AI systems of varying capability, autonomy and criticality. Firstly, it is balanced by addressing safety alongside other critical ethical issues such as privacy and equity, acknowledging complexities and trade-offs in the broader societal impact of AI. Secondly, it is integrated by bringing together the social, ethical and technical aspects of safety assurance in a way that is traceable and accountable. Thirdly, it is grounded in long-established safety norms and practices, such as being sensitive to context and maintaining risk proportionality. Whether the AI capability is narrow and constrained or general-purpose and powered by a frontier or foundational model, the BIG argument insists on a systematic treatment of safety. Further, it places a particular focus on the novel hazardous behaviours emerging from the advanced capabilities of frontier AI models and the open contexts in which they are rapidly being deployed. These complex issues are considered within a wider AI safety case, approaching assurance from both technical and sociotechnical perspectives. Examples illustrating the use of the BIG argument are provided throughout the paper.
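The paper presents the BIG argument as a structured safety argument rather than code. As a purely illustrative sketch, the snippet below models a claim tree in the style of Goal Structuring Notation (GSN), a long-established safety-case notation, and flags undeveloped goals; the class names, node IDs, and claims are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a minimal GSN-style claim tree, not the
# paper's method. All IDs and claim texts below are hypothetical.

@dataclass
class Node:
    id: str
    statement: str
    children: list["Node"] = field(default_factory=list)

class Goal(Node): ...      # a safety claim to be supported
class Strategy(Node): ...  # how a goal is decomposed into sub-goals
class Evidence(Node): ...  # an artefact supporting a goal

def undeveloped_goals(node: Node) -> list[str]:
    """Return goals with no supporting strategy or evidence (assurance gaps)."""
    gaps = []
    if isinstance(node, Goal) and not node.children:
        gaps.append(node.id)
    for child in node.children:
        gaps.extend(undeveloped_goals(child))
    return gaps

case = Goal("G1", "The AI system is acceptably safe in its operating context", children=[
    Strategy("S1", "Argue over identified hazardous behaviours", children=[
        Goal("G2", "Hazardous behaviour H1 is mitigated", children=[
            Evidence("E1", "Red-team evaluation report for H1"),
        ]),
        Goal("G3", "Hazardous behaviour H2 is mitigated"),  # no support yet
    ]),
])

print(undeveloped_goals(case))  # -> ['G3']: the open assurance gap
```

Recording the argument as data like this is one way to make the traceability the abstract calls for machine-checkable: every goal either carries supporting evidence or is surfaced as an open gap.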
Similar Papers
Safety Cases: A Scalable Approach to Frontier AI Safety
Computers and Society
Proposes safety cases as a scalable way for developers to demonstrate that frontier AI systems are safe.
An alignment safety case sketch based on debate
Artificial Intelligence
Sketches an alignment safety case in which AI debate is used to argue for a system's honesty.
Towards provable probabilistic safety for scalable embodied AI systems
Systems and Control
Works toward provable probabilistic safety for embodied AI systems, including rare failure events.