ASTRA: Autonomous Spatial-Temporal Red-teaming for AI Software Assistants
By: Xiangzhe Xu, Guangyu Shen, Zian Su, and more
Potential Business Impact:
Finds hidden mistakes in AI-written code.
AI coding assistants like GitHub Copilot are rapidly transforming software development, but their safety remains deeply uncertain, especially in high-stakes domains like cybersecurity. Current red-teaming tools often rely on fixed benchmarks or unrealistic prompts, missing many real-world vulnerabilities. We present ASTRA, an automated agent system designed to systematically uncover safety flaws in AI-driven code generation and security guidance systems. ASTRA works in three stages: (1) it builds structured, domain-specific knowledge graphs that model complex software tasks and known weaknesses; (2) it performs online vulnerability exploration of each target model by adaptively probing both its input space (spatial exploration) and its reasoning processes (temporal exploration), guided by the knowledge graphs; and (3) it generates high-quality violation-inducing cases to improve model alignment. Unlike prior methods, ASTRA focuses on realistic inputs, the kinds of requests developers might actually make, and combines offline abstraction-guided domain modeling with online knowledge graph adaptation to surface corner-case vulnerabilities. Across two major evaluation domains, ASTRA finds 11-66% more issues than existing techniques and produces test cases that lead to 17% more effective alignment training, showing its practical value for building safer AI systems.
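To make the three-stage pipeline concrete, here is a minimal Python sketch of a knowledge-graph-guided exploration loop. It is an illustrative assumption, not the paper's implementation: the class names (TaskNode, WeaknessNode), the probe-construction strategy, and the stand-in target model and safety judge are all hypothetical placeholders.

```python
"""Hedged sketch of a knowledge-graph-guided red-teaming loop in the spirit of
ASTRA's three stages. All names and helpers here are illustrative assumptions."""
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class WeaknessNode:
    cwe_id: str        # e.g. "CWE-78" (OS command injection)
    description: str


@dataclass
class TaskNode:
    scenario: str      # a realistic developer request
    weaknesses: list[WeaknessNode] = field(default_factory=list)


def spatial_probes(graph: list[TaskNode]) -> list[str]:
    """Stage 2a (spatial exploration): enumerate input-space probes by pairing
    realistic task scenarios with the weaknesses linked to them in the graph."""
    probes = []
    for task in graph:
        for weak in task.weaknesses:
            probes.append(
                f"{task.scenario} (context that could trigger {weak.cwe_id}: "
                f"{weak.description})"
            )
    return probes


def temporal_variants(prompt: str, step_hints: list[str]) -> list[str]:
    """Stage 2b (temporal exploration): vary the model's reasoning process by
    prepending different step-by-step instructions to the same probe."""
    return [f"{hint}\n{prompt}" for hint in step_hints]


def explore(graph: list[TaskNode],
            target_model: Callable[[str], str],
            judge: Callable[[str], bool]) -> list[tuple[str, str]]:
    """Stage 3: collect violation-inducing cases for later alignment training."""
    violations = []
    hints = ["Answer directly.", "First outline each step, then write the code."]
    for probe in spatial_probes(graph):
        for variant in temporal_variants(probe, hints):
            response = target_model(variant)
            if judge(response):  # the judge flags unsafe output
                violations.append((variant, response))
    return violations


if __name__ == "__main__":
    graph = [TaskNode(
        scenario="Write a helper that runs a user-supplied shell command",
        weaknesses=[WeaknessNode("CWE-78", "unsanitized input passed to a shell")],
    )]
    # Stand-ins for a real code assistant and a real safety judge.
    mock_model = lambda p: "os.system(user_input)"
    mock_judge = lambda r: "os.system" in r
    print(explore(graph, mock_model, mock_judge))
```

In the actual system, the knowledge graph, probe construction, and judging are adaptive and model-specific; this sketch only shows how the spatial and temporal dimensions could compose into a single exploration loop.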
Similar Papers
ASTRA: Agentic Steerability and Risk Assessment Framework
Cryptography and Security
Makes AI agents follow rules to prevent harm.
AI Agentic Vulnerability Injection And Transformation with Optimized Reasoning
Cryptography and Security
Creates realistic bugs for training security AI.
GitHub's Copilot Code Review: Can AI Spot Security Flaws Before You Commit?
Software Engineering
AI code checker misses big security problems.