Institutional AI: Governing LLM Collusion in Multi-Agent Cournot Markets via Public Governance Graphs
By: Marcantonio Bracale Syrnikov, Federico Pierucci, Marcello Galisai, and more
Potential Business Impact:
Keeps AI from teaming up to do bad things.
Multi-agent LLM ensembles can converge on coordinated, socially harmful equilibria. This paper advances an experimental framework for evaluating Institutional AI, our system-level approach to AI alignment that reframes alignment from preference engineering in agent-space to mechanism design in institution-space. Central to this approach is the governance graph, a public, immutable manifest that declares legal states, transitions, sanctions, and restorative paths; an Oracle/Controller runtime interprets this manifest, attaching enforceable consequences to evidence of coordination while recording a cryptographically keyed, append-only governance log for audit and provenance. We apply the Institutional AI framework to govern the Cournot collusion case documented by prior work and compare three regimes: Ungoverned (baseline incentives from the structure of the Cournot market), Constitutional (a prompt-only, policy-as-prompt prohibition implemented as a fixed written anti-collusion constitution), and Institutional (governance-graph-based). Across six model configurations including cross-provider pairs (N=90 runs/condition), the Institutional regime produces large reductions in collusion: mean tier falls from 3.1 to 1.8 (Cohen's d=1.28), and severe-collusion incidence drops from 50% to 5.6%. The prompt-only Constitutional baseline yields no reliable improvement, illustrating that declarative prohibitions do not bind under optimization pressure. These results suggest that multi-agent alignment may benefit from being framed as an institutional design problem, where governance graphs can provide a tractable abstraction for alignment-relevant collective behavior.
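To make the core abstractions concrete, here is a minimal illustrative sketch (not the paper's implementation) of a governance-graph manifest with states, transitions, sanctions, and restorative paths, plus a keyed, append-only log in which each entry is HMAC-chained to its predecessor so that tampering or reordering breaks verification. All names, states, and events are hypothetical stand-ins for whatever the actual framework declares.

```python
# Illustrative sketch only: a toy governance graph and a keyed,
# append-only audit log. All state/event names are assumptions.
import hmac, hashlib, json

# Hypothetical manifest: legal states, declared transitions,
# sanctions, and restorative paths back toward compliance.
GOVERNANCE_GRAPH = {
    "states": ["compliant", "flagged", "sanctioned"],
    "transitions": {
        ("compliant", "coordination_evidence"): "flagged",
        ("flagged", "coordination_evidence"): "sanctioned",
        ("flagged", "clean_audit"): "compliant",    # restorative path
        ("sanctioned", "remediation"): "flagged",   # restorative path
    },
    "sanctions": {"sanctioned": "cap_output_quantity"},
}

class GovernanceLog:
    """Append-only log; each entry's keyed HMAC covers the previous
    entry's MAC, chaining the log for audit and provenance."""
    def __init__(self, key: bytes):
        self.key = key
        self.entries = []  # list of (record_json, mac_hex)

    def append(self, record: dict) -> str:
        prev_mac = self.entries[-1][1] if self.entries else ""
        record_json = json.dumps(record, sort_keys=True)
        mac = hmac.new(self.key, (prev_mac + record_json).encode(),
                       hashlib.sha256).hexdigest()
        self.entries.append((record_json, mac))
        return mac

    def verify(self) -> bool:
        prev_mac = ""
        for record_json, mac in self.entries:
            expected = hmac.new(self.key, (prev_mac + record_json).encode(),
                                hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, mac):
                return False
            prev_mac = mac
        return True

def step(state: str, event: str, log: GovernanceLog) -> str:
    """Toy Oracle/Controller step: apply a declared transition
    (or stay put if none is declared) and record it in the log."""
    new_state = GOVERNANCE_GRAPH["transitions"].get((state, event), state)
    log.append({"from": state, "event": event, "to": new_state,
                "sanction": GOVERNANCE_GRAPH["sanctions"].get(new_state)})
    return new_state

log = GovernanceLog(key=b"audit-key")
state = "compliant"
for event in ["coordination_evidence", "coordination_evidence", "remediation"]:
    state = step(state, event, log)
print(state, log.verify())  # flagged True
```

Because the manifest is declarative data rather than prompt text, sanctions here are enforced by the runtime regardless of what the agents say, which is the contrast the abstract draws against the prompt-only Constitutional baseline.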
Similar Papers
Institutional AI: A Governance Framework for Distributional AGI Safety
Computers and Society
Makes AI agents follow rules, not cheat.
Democracy-in-Silico: Institutional Design as Alignment in AI-Governed Polities
Artificial Intelligence
AI societies learn to govern fairly.
From Firms to Computation: AI Governance and the Evolution of Institutions
Human-Computer Interaction
Helps AI and people work together fairly.