The Missing Variable: Socio-Technical Alignment in Risk Evaluation
By: Niclas Flehmig, Mary Ann Lundteigen, Shen Yin
Potential Business Impact:
Makes AI safer by checking how people and machines work together.
This paper addresses a critical gap in the risk assessment of AI-enabled safety-critical systems. While these systems, in which AI assists human operators, function as complex socio-technical systems, existing risk evaluation methods fail to account for the complex interactions among human, technical, and organizational elements. Through a comparative analysis of system attributes from both socio-technical and AI-enabled systems and a review of current risk evaluation methods, we confirm the absence of socio-technical considerations in standard risk expressions. To bridge this gap, we introduce a novel socio-technical alignment ($STA$) variable designed to be integrated into the foundational risk equation. This variable estimates the degree of harmonious interaction between the AI system, human operators, and organizational processes. A case study of an AI-enabled liquid hydrogen bunkering system demonstrates the variable's relevance. By comparing a naive and a safeguarded system design, we illustrate how the $STA$-augmented expression captures socio-technical safety implications that traditional risk evaluation overlooks, providing a more holistic basis for risk evaluation.
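To make the idea of augmenting the foundational risk equation concrete, one plausible sketch is shown below. The classical expression and the multiplicative form of the $STA$ adjustment are assumptions for illustration, not the paper's actual formulation:

```latex
% Classical risk expression: probability of a hazardous event times its consequence.
R = P \cdot C

% Hypothetical STA-augmented form (illustrative assumption):
% STA \in (0, 1] estimates how harmoniously the AI system, human
% operators, and organizational processes interact. Lower alignment
% inflates the evaluated risk.
R_{STA} = P \cdot C \cdot \frac{1}{STA}, \qquad STA \in (0, 1]
```

Under this sketch, a safeguarded design with high alignment ($STA \to 1$) leaves risk near the classical estimate, while a naive design with poor alignment inflates it, capturing socio-technical effects the classical expression misses.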
Similar Papers
AI Safety is Stuck in Technical Terms -- A System Safety Response to the International AI Safety Report
Computers and Society
Makes AI safer by looking at all its parts.
Safety Co-Option and Compromised National Security: The Self-Fulfilling Prophecy of Weakened AI Risk Thresholds
Computers and Society
AI safety rules are being weakened for faster weapons.
Societal Capacity Assessment Framework: Measuring Resilience to Inform Advanced AI Risk Management
Computers and Society
Helps countries prepare for new AI dangers.