Async Control: Stress-testing Asynchronous Control Measures for LLM Agents
By: Asa Cooper Stickland, Jan Michelfeit, Arathi Mani, and more
LLM-based software engineering agents are increasingly used in real-world development tasks, often with access to sensitive data or security-critical codebases. Such agents could intentionally sabotage these codebases if they were misaligned. We investigate asynchronous monitoring, in which a monitoring system reviews agent actions after the fact. Unlike synchronous monitoring, this approach imposes no runtime latency while still attempting to disrupt attacks before irreversible harm occurs. We treat monitor development as an adversarial game between a blue team (who design monitors) and a red team (who create sabotaging agents). We attempt to set the game rules such that they upper-bound the sabotage potential of an agent based on Claude 4.1 Opus. To ground this game in a realistic, high-stakes deployment scenario, we develop a suite of five diverse software engineering environments that simulate tasks an agent might perform within an AI developer's internal infrastructure. Over the course of the game, we develop an ensemble monitor that achieves a 6% false negative rate at a 1% false positive rate on a held-out test environment. We then estimate the risk of sabotage at deployment time by extrapolating from our monitor's false negative rate. We describe one simple model for this extrapolation, present a sensitivity analysis, and describe situations in which the model would be invalid. Code is available at: https://github.com/UKGovernmentBEIS/async-control.
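To make the abstract's two key quantities concrete, the sketch below shows (a) how a false negative rate at a fixed 1% false positive rate can be computed from monitor suspicion scores, and (b) one toy way to extrapolate from that rate to deployment-time sabotage risk. This is an illustrative assumption-laden example, not the paper's code: the score distributions are synthetic, and the extrapolation model (independent attack attempts, any single undetected attempt counts as successful sabotage) is a hypothetical stand-in for the simple model the paper describes.

```python
"""Illustrative sketch only: FNR at a fixed FPR, plus a toy risk extrapolation.
All scores, distributions, and the extrapolation model are assumptions made
for illustration and are not taken from the paper."""
import numpy as np


def fnr_at_fpr(benign_scores: np.ndarray, attack_scores: np.ndarray,
               target_fpr: float = 0.01) -> float:
    """False negative rate when the flagging threshold is chosen so that
    roughly `target_fpr` of benign trajectories are flagged."""
    # Threshold = (1 - target_fpr) quantile of benign suspicion scores.
    threshold = np.quantile(benign_scores, 1.0 - target_fpr)
    # An attack trajectory is a false negative if its score does not exceed
    # the threshold, i.e. the monitor fails to flag it.
    return float(np.mean(attack_scores <= threshold))


def sabotage_risk(fnr: float, num_attempts: int) -> float:
    """Toy extrapolation (assumption, not the paper's model): with
    `num_attempts` independent attack attempts, each evading detection with
    probability `fnr`, and any single undetected attempt counting as
    successful sabotage, the risk is 1 - (1 - fnr)^num_attempts."""
    return 1.0 - (1.0 - fnr) ** num_attempts


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical monitor suspicion scores for benign and attack runs.
    benign = rng.normal(loc=0.0, scale=1.0, size=10_000)
    attacks = rng.normal(loc=3.0, scale=1.0, size=1_000)
    fnr = fnr_at_fpr(benign, attacks, target_fpr=0.01)
    print(f"FNR at 1% FPR: {fnr:.3f}")
    for k in (1, 5, 20):
        print(f"risk over {k} attempts: {sabotage_risk(fnr, k):.3f}")
```

Under this toy model the risk grows quickly with the number of attack opportunities, which is why the paper's sensitivity analysis and the stated limits of the extrapolation matter when reading the 6% figure.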