Distributional AGI Safety
By: Nenad Tomašev, Matija Franklin, Julian Jacobs, and more
AI safety and alignment research has predominantly focused on safeguarding individual AI systems, resting on the assumption that a monolithic Artificial General Intelligence (AGI) will eventually emerge. The alternative AGI emergence hypothesis, in which general capability levels first manifest through coordination among groups of sub-AGI agents with complementary skills and affordances, has received far less attention. Here we argue that this patchwork AGI hypothesis deserves serious consideration and should inform the development of corresponding safeguards and mitigations. The rapid deployment of advanced AI agents that can use tools, communicate, and coordinate makes this an urgent safety concern. We therefore propose a framework for distributional AGI safety that moves beyond evaluating and aligning individual agents. The framework centers on the design and implementation of virtual agentic sandbox economies (impermeable or semi-permeable), in which agent-to-agent transactions are governed by robust market mechanisms, coupled with appropriate auditability, reputation management, and oversight to mitigate collective risks.
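To make the proposal more concrete, the sketch below is a toy illustration of such a sandbox economy. It is not the paper's implementation: the names (SandboxEconomy, Agent, Transaction, Permeability) and the reputation thresholds are hypothetical choices, used only to show how gated agent-to-agent transactions, an append-only audit log, reputation updates, and an impermeable or semi-permeable boundary might fit together.

```python
"""Toy sketch of an agentic sandbox economy (illustrative only; all names
and parameters are hypothetical, not taken from the paper)."""

from dataclasses import dataclass
from enum import Enum
from typing import Dict, List


class Permeability(Enum):
    IMPERMEABLE = "impermeable"        # no value may leave the sandbox
    SEMI_PERMEABLE = "semi_permeable"  # outflows allowed, subject to checks


@dataclass
class Transaction:
    sender: str
    receiver: str
    amount: float
    approved: bool


@dataclass
class Agent:
    name: str
    balance: float
    reputation: float = 1.0  # in [0, 1]; lowered when attempts are rejected


class SandboxEconomy:
    """Mediates agent-to-agent transactions inside a virtual sandbox."""

    def __init__(self, permeability: Permeability, min_reputation: float = 0.5):
        self.permeability = permeability
        self.min_reputation = min_reputation
        self.agents: Dict[str, Agent] = {}
        self.audit_log: List[Transaction] = []  # every attempt is recorded

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def transact(self, sender: str, receiver: str, amount: float) -> bool:
        """Clear a transaction if both parties are reputable and funds suffice."""
        src, dst = self.agents[sender], self.agents[receiver]
        ok = (
            src.reputation >= self.min_reputation
            and dst.reputation >= self.min_reputation
            and src.balance >= amount > 0
        )
        if ok:
            src.balance -= amount
            dst.balance += amount
        else:
            # Rejected attempts erode reputation, discouraging spam or probing.
            src.reputation = max(0.0, src.reputation - 0.1)
        self.audit_log.append(Transaction(sender, receiver, amount, ok))
        return ok

    def withdraw_external(self, sender: str, amount: float) -> bool:
        """Value crosses the sandbox boundary only if it is semi-permeable."""
        if self.permeability is Permeability.IMPERMEABLE:
            return False
        src = self.agents[sender]
        ok = src.reputation >= self.min_reputation and src.balance >= amount > 0
        if ok:
            src.balance -= amount
        self.audit_log.append(Transaction(sender, "external", amount, ok))
        return ok


if __name__ == "__main__":
    economy = SandboxEconomy(Permeability.SEMI_PERMEABLE)
    economy.register(Agent("planner", balance=100.0))
    economy.register(Agent("tool_user", balance=20.0))
    economy.transact("planner", "tool_user", 30.0)
    print(economy.audit_log)
```

In this toy version, recording every attempted transaction rather than only the cleared ones is what makes post-hoc auditing and reputation management possible, mirroring the oversight role the abstract describes.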
Similar Papers
Position Paper: Bounded Alignment: What (Not) To Expect From AGI Agents (Artificial Intelligence)
Makes AI safer by studying animal brains.

An Approach to Technical AGI Safety and Security (Artificial Intelligence)
Keeps powerful AI from being used for harm.

A Framework for Inherently Safer AGI through Language-Mediated Active Inference (Artificial Intelligence)
Makes smart computers safer by design.