Score: 4

Cyber-Zero: Training Cybersecurity Agents without Runtime

Published: July 29, 2025 | arXiv ID: 2508.00910v2

By: Terry Yue Zhuo, Dingmin Wang, Hantian Ding, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Trains AI agents to solve cybersecurity challenges without needing live execution environments for training.

Large Language Models (LLMs) have achieved remarkable success in software engineering tasks when trained with executable runtime environments, particularly in resolving GitHub issues. However, such runtime environments are often unavailable in other domains, especially cybersecurity, where challenge configurations and execution contexts are ephemeral or restricted. We present Cyber-Zero, the first runtime-free framework for synthesizing high-quality agent trajectories to train cybersecurity LLMs. Cyber-Zero leverages publicly available CTF writeups and employs persona-driven LLM simulation to reverse-engineer runtime behaviors and generate realistic, long-horizon interaction sequences without actual environments. Using trajectories synthesized by Cyber-Zero, we train LLM-based agents that achieve up to 13.1% absolute performance gains over baseline models on three prominent CTF benchmarks: InterCode-CTF, NYU CTF Bench, and Cybench. Our best model, Cyber-Zero-32B, establishes new state-of-the-art performance among open-weight models, matching the capabilities of proprietary systems like DeepSeek-V3-0324 and Claude-3.5-Sonnet while offering superior cost-effectiveness. This demonstrates that runtime-free trajectory synthesis can effectively democratize the development of state-of-the-art cybersecurity agents.
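The persona-driven simulation described in the abstract can be sketched in minimal form: one persona plays the agent issuing commands, the other plays the simulated runtime producing plausible outputs, and their alternating turns form a synthetic trajectory. The stub functions, command script, and canned outputs below are illustrative stand-ins, not the paper's actual prompts or implementation; in Cyber-Zero both roles would be played by an LLM conditioned on a CTF writeup.

```python
# Minimal sketch of runtime-free trajectory synthesis via two personas.
# Each persona is a deterministic stub here so the loop is runnable;
# the real system would call an LLM for both roles.

def agent_persona(history):
    # Stand-in for the "solver" LLM: picks the next shell command.
    script = ["ls", "cat flag.txt", "submit CTF{example}"]
    n_agent_turns = len([h for h in history if h[0] == "agent"])
    return script[n_agent_turns]

def runtime_persona(command):
    # Stand-in for the "simulated runtime" LLM: produces plausible
    # output for the command without executing anything.
    outputs = {"ls": "flag.txt", "cat flag.txt": "CTF{example}"}
    return outputs.get(command, "ok")

def synthesize_trajectory(max_turns=3):
    # Alternate agent and runtime turns to build one training trajectory.
    history = []
    for _ in range(max_turns):
        cmd = agent_persona(history)
        history.append(("agent", cmd))
        if cmd.startswith("submit"):
            break
        history.append(("runtime", runtime_persona(cmd)))
    return history

trajectory = synthesize_trajectory()
```

The resulting list of (role, text) pairs is the kind of long-horizon interaction sequence that, at scale and with LLM-generated turns, would serve as supervised training data.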

Country of Origin
πŸ‡¦πŸ‡Ί πŸ‡ΊπŸ‡Έ Australia, United States

Repos / Data Links

Page Count
130 pages

Category
Computer Science: Cryptography and Security