Systems Security Foundations for Agentic Computing
By: Mihai Christodorescu, Earlence Fernandes, Ashish Hooda, and more
Potential Business Impact:
Provides a research roadmap for making AI agents more resistant to real-world attackers.
This paper articulates short- and long-term research problems in AI agent security and privacy through the lens of computer systems security. This approach examines the end-to-end security properties of entire systems, rather than AI models in isolation. Hardening a single model is useful, but it is often insufficient: by analogy, building a model that is always helpful and harmless is akin to building software that is always helpful and harmless, and decades of cybersecurity research and practice show that this alone does not produce secure systems. Rather, constructing an informed and realistic attacker model before building a system, applying hard-earned lessons from software security, and continuously improving a system's security posture together form a tried-and-tested approach to securing real computer systems. A key goal is to examine where research challenges arise when traditional security principles are applied to AI agents. A secondary goal of this report is to distill these ideas for AI and ML practitioners and researchers. We discuss the challenges of applying security principles to agentic computing, present 11 case studies of real attacks on agentic systems, and define a series of new research problems specific to the security of agentic systems.
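To make the systems-security framing concrete, the sketch below illustrates one hard-earned lesson from software security applied to an agent: treat the model's proposed tool calls as untrusted input, since injected instructions in untrusted content can steer them, and mediate every call through a least-privilege allowlist before execution. This is a minimal illustration under assumptions of our own, not the paper's method; all names here (ToolCall, Policy, guarded_execute, the /safe/ path prefix) are hypothetical.

```python
# Illustrative sketch only: names and policy are hypothetical, not from the paper.
from dataclasses import dataclass, field
from typing import Callable

@dataclass(frozen=True)
class ToolCall:
    """A tool invocation proposed by the model; treated as untrusted input."""
    tool: str
    args: dict

@dataclass
class Policy:
    """Least-privilege allowlist: permitted tools, each with an argument check."""
    allowed: dict[str, Callable[[dict], bool]] = field(default_factory=dict)

    def permits(self, call: ToolCall) -> bool:
        check = self.allowed.get(call.tool)
        return check is not None and check(call.args)

def guarded_execute(call: ToolCall, policy: Policy, tools: dict[str, Callable]) -> str:
    # Attacker model: injected instructions in untrusted content can steer the
    # model's output, so every proposed call is mediated before it runs.
    if not policy.permits(call):
        return f"DENIED: {call.tool}({call.args})"
    return tools[call.tool](**call.args)

# Example wiring: reading under /safe/ is allowed; shell execution is not.
tools = {
    "read_file": lambda path: f"<contents of {path}>",
    "run_shell": lambda cmd: f"<output of {cmd}>",
}
policy = Policy(allowed={
    "read_file": lambda args: str(args.get("path", "")).startswith("/safe/"),
})

print(guarded_execute(ToolCall("read_file", {"path": "/safe/notes.txt"}), policy, tools))
print(guarded_execute(ToolCall("run_shell", {"cmd": "rm -rf /"}), policy, tools))
```

Centralizing the check at one mediation point mirrors the classic reference-monitor design: the security argument then rests on the policy and the monitor, not on the model always behaving well.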
Similar Papers
A Safety and Security Framework for Real-World Agentic Systems
Machine Learning (CS)
Proposes a framework for ensuring the safety and security of agentic systems deployed in the real world.
Agentic AI Security: Threats, Defenses, Evaluation, and Open Challenges
Artificial Intelligence
Surveys threats, defenses, evaluation methods, and open challenges in agentic AI security.
Formalizing the Safety, Security, and Functional Properties of Agentic AI Systems
Artificial Intelligence
Formally specifies the safety, security, and functional properties of agentic AI systems.