From Linear Risk to Emergent Harm: Complexity as the Missing Core of AI Governance
By: Hugo Roger Paz
Risk-based AI regulation has become the dominant paradigm in AI governance, promising proportional controls aligned with anticipated harms. This paper argues that such frameworks often fail for structural reasons: they implicitly assume linear causality, stable system boundaries, and largely predictable responses to regulation. In practice, AI operates within complex adaptive socio-technical systems in which harm is frequently emergent, delayed, redistributed, and amplified through feedback loops and strategic adaptation by system actors. As a result, compliance can increase while harm is displaced or concealed rather than eliminated. We propose a complexity-based framework for AI governance that treats regulation as intervention rather than control, prioritises dynamic system mapping over static classifications, and integrates causal reasoning and simulation for policy design under uncertainty. The aim is not to eliminate uncertainty, but to enable robust system stewardship through monitoring, learning, and iterative revision of governance interventions.
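To make the displacement mechanism concrete, here is a minimal agent-based sketch in Python. It is not a model from the paper: the agents, channels, and every parameter (DETECTION_PROB, PENALTY, SHADOW_FRICTION, ADAPT_RATE) are illustrative assumptions. Agents choose between a regulated channel the regulator can observe and a shadow channel it cannot; as the rule tightens, each agent strategically picks the cheapest combination of channel and compliance.

# Hypothetical toy model, not taken from the paper: agents choose between a
# regulated channel (visible to the regulator) and a shadow channel (not).
# All parameter names and values below are illustrative assumptions.
import random

random.seed(0)

N_AGENTS = 1000
N_STEPS = 50
DETECTION_PROB = 0.8   # chance a non-compliant regulated agent is caught
PENALTY = 3.0          # cost of being caught
SHADOW_FRICTION = 0.5  # fixed cost of operating outside the regulated channel
ADAPT_RATE = 0.1       # fraction of agents re-optimising each step

class Agent:
    def __init__(self):
        self.channel = "regulated"
        self.compliant = False

    def cost(self, channel, compliant, stringency):
        # Compliance cost scales with regulatory stringency; penalties
        # apply only where the regulator can observe behaviour.
        if channel == "regulated":
            return stringency if compliant else DETECTION_PROB * PENALTY
        return SHADOW_FRICTION

    def adapt(self, stringency):
        # Strategic adaptation: pick the cheapest (channel, compliance) pair.
        options = [("regulated", True), ("regulated", False), ("shadow", False)]
        self.channel, self.compliant = min(
            options, key=lambda o: self.cost(o[0], o[1], stringency))

agents = [Agent() for _ in range(N_AGENTS)]

for step in range(N_STEPS):
    stringency = 0.05 * step  # the regulator tightens the rule linearly
    for agent in random.sample(agents, int(ADAPT_RATE * N_AGENTS)):
        agent.adapt(stringency)

    regulated = [a for a in agents if a.channel == "regulated"]
    # The regulator measures compliance only in the channel it can see.
    measured = (sum(a.compliant for a in regulated) / len(regulated)
                if regulated else 1.0)
    # Total harm counts both channels; non-compliant activity is assumed
    # twice as harmful wherever it occurs.
    harm = sum(1.0 if a.compliant else 2.0 for a in agents)
    if step % 10 == 0:
        print(f"step {step:2d}  stringency {stringency:.2f}  "
              f"measured compliance {measured:.2f}  total harm {harm:.0f}")

Under these assumptions, total harm first falls and then rebounds as agents exit the monitored channel, while the regulator's compliance metric stays high: exactly the divergence between measured compliance and realised harm that a complexity-based governance framework would need to monitor for.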