On Automating Security Policies with Contemporary LLMs
By: Pablo Fernández Saura, K. R. Jayaram, Vatche Isahagian, and more
Potential Business Impact:
Automates the translation of attack mitigation policies into concrete defensive actions, reducing manual security response effort.
The complexity of modern computing environments and the growing sophistication of cyber threats necessitate a more robust, adaptive, and automated approach to security enforcement. In this paper, we present a framework leveraging large language models (LLMs) for automating attack mitigation policy compliance through an innovative combination of in-context learning and retrieval-augmented generation (RAG). We begin by describing how our system collects and manages both tool and API specifications, storing them in a vector database to enable efficient retrieval of relevant information. We then detail the architectural pipeline that first decomposes high-level mitigation policies into discrete tasks and subsequently translates each task into a set of actionable API calls. Our empirical evaluation, conducted using publicly available CTI policies in STIXv2 format and Windows API documentation, demonstrates significant improvements in precision, recall, and F1-score when employing RAG compared to a non-RAG baseline.
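The abstract describes a pipeline that indexes API specifications in a vector database, retrieves the specifications relevant to each decomposed policy task, and prompts an LLM to produce the corresponding API calls. The sketch below illustrates that retrieval-and-prompting step under stated assumptions: a bag-of-words cosine similarity stands in for a real embedding model and vector database, the example Windows API specifications and names such as `retrieve_specs` and `build_prompt` are illustrative, and the paper's actual implementation may differ.

```python
# Minimal sketch of the RAG step: index API specs, retrieve the most relevant
# ones for a decomposed policy task, and assemble an in-context LLM prompt.
# The similarity function and the example specs are illustrative assumptions.
import math
import re
from collections import Counter


def _vectorize(text: str) -> Counter:
    """Tokenize into lowercase word counts (stand-in for a real embedding)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical API specifications, in the spirit of Windows API documentation.
API_SPECS = [
    "TerminateProcess(hProcess, uExitCode): ends the specified process.",
    "RegSetValueExW(hKey, lpValueName, ...): sets a registry value.",
    "FwpmFilterAdd0(engineHandle, filter, ...): adds a filtering rule to block network traffic.",
]
_INDEX = [(_vectorize(spec), spec) for spec in API_SPECS]  # in-memory "vector database"


def retrieve_specs(task: str, k: int = 2) -> list[str]:
    """Return the k API specs most similar to a decomposed policy task."""
    query = _vectorize(task)
    ranked = sorted(_INDEX, key=lambda item: _cosine(query, item[0]), reverse=True)
    return [spec for _, spec in ranked[:k]]


def build_prompt(task: str) -> str:
    """Assemble an in-context prompt: retrieved specs plus the task to translate."""
    context = "\n".join(retrieve_specs(task))
    return (
        "You are a security automation assistant.\n"
        f"Available API specifications:\n{context}\n\n"
        f"Task: {task}\n"
        "Respond with the sequence of API calls that implements this task."
    )


if __name__ == "__main__":
    # Example decomposed task from a hypothetical mitigation policy.
    print(build_prompt("Block outbound network traffic to the attacker's address"))
```

In a full system, the retrieved context would be passed to an LLM (and the non-RAG baseline in the paper would correspond to prompting without the retrieved specifications).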
Similar Papers
Large Language Models for Explainable Threat Intelligence
Computation and Language
Uses LLMs to detect threats and explain the reasoning behind its findings.
Adapting Large Language Models to Emerging Cybersecurity using Retrieval Augmented Generation
Cryptography and Security
Helps computers spot new cyber threats faster.
LLM-Assisted Proactive Threat Intelligence for Automated Reasoning
Cryptography and Security
Uses LLMs to speed up proactive threat intelligence and automated reasoning.