Automated Post-Incident Policy Gap Analysis via Threat-Informed Evidence Mapping using Large Language Models

Published: January 4, 2026 | arXiv ID: 2601.03287v1

By: Huan Lin Oh, Jay Yong Jun Jie, Mandy Lee Ling Siu and more

Potential Business Impact:

Automatically identifies security policy gaps from incident evidence, reducing the manual effort of post-incident reviews.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Cybersecurity post-incident reviews are essential for identifying control failures and improving organisational resilience, yet they remain labour-intensive, time-consuming, and heavily reliant on expert judgment. This paper investigates whether Large Language Models (LLMs) can augment post-incident review workflows by autonomously analysing system evidence and identifying security policy gaps. We present a threat-informed, agentic framework that ingests log data, maps observed behaviours to the MITRE ATT&CK framework, and evaluates organisational security policies for adequacy and compliance. Using a simulated brute-force attack scenario against a Windows OpenSSH service (MITRE ATT&CK T1110), the system leverages GPT-4o for reasoning, LangGraph for multi-agent workflow orchestration, and LlamaIndex for traceable policy retrieval. Experimental results indicate that the LLM-based pipeline can interpret log-derived evidence, identify insufficient or missing policy controls, and generate actionable remediation recommendations with explicit evidence-to-policy traceability. Unlike prior work that treats log analysis and policy validation as isolated tasks, this study integrates both into a unified end-to-end proof-of-concept post-incident review framework. The findings suggest that LLM-assisted analysis has the potential to improve the efficiency, consistency, and auditability of post-incident evaluations, while highlighting the continued need for human oversight in high-stakes cybersecurity decision-making.
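The core mapping step the abstract describes, relating log-derived evidence to a MITRE ATT&CK technique such as T1110 (Brute Force), can be illustrated with a minimal rule-based sketch. This is not the paper's implementation (which uses GPT-4o, LangGraph, and LlamaIndex); the log format, threshold, and function names below are assumptions for illustration only.

```python
import re
from collections import Counter

# Hypothetical excerpt of Windows OpenSSH log lines (format assumed for illustration).
LOG_LINES = [
    "2026-01-04T10:00:01 sshd: Failed password for admin from 203.0.113.7 port 50211",
    "2026-01-04T10:00:02 sshd: Failed password for admin from 203.0.113.7 port 50212",
    "2026-01-04T10:00:03 sshd: Failed password for root from 203.0.113.7 port 50213",
    "2026-01-04T10:00:04 sshd: Failed password for admin from 203.0.113.7 port 50214",
    "2026-01-04T10:00:05 sshd: Failed password for admin from 203.0.113.7 port 50215",
    "2026-01-04T10:00:09 sshd: Accepted password for alice from 198.51.100.4 port 50300",
]

FAILED_LOGIN = re.compile(r"Failed password for (\S+) from (\S+)")

def map_to_attack(lines, threshold=5):
    """Flag repeated failed logins as MITRE ATT&CK T1110 (Brute Force).

    Counts failed-authentication events per source IP and emits a
    finding with evidence counts once the threshold is reached.
    """
    failures = Counter()
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(2)] += 1  # group(2) is the source IP
    return [
        {
            "technique": "T1110",
            "name": "Brute Force",
            "source_ip": ip,
            "failed_attempts": count,
        }
        for ip, count in failures.items()
        if count >= threshold
    ]

findings = map_to_attack(LOG_LINES)
```

In the paper's agentic framework, findings like these would then be passed to a policy-retrieval step that checks whether organisational controls (e.g. lockout or rate-limiting policies) adequately cover the observed technique.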

Country of Origin
🇸🇬 Singapore

Page Count
5 pages

Category
Computer Science:
Cryptography and Security