LLMs, You Can Evaluate It! Design of Multi-perspective Report Evaluation for Security Operation Centers
By: Hiroyuki Okada, Tatsumi Oba, Naoto Yanai
Potential Business Impact:
Helps computers write better security reports.
Security operation centers (SOCs) often produce analysis reports on security incidents, and large language models (LLMs) will likely be used for this task in the near future. We postulate that a better understanding of how veteran analysts evaluate reports, including their feedback, can help produce better analysis reports in SOCs. In this paper, we aim to leverage LLMs for evaluating analysis reports. To this end, we first construct an Analyst-wise checklist that reflects SOC practitioners' opinions on report evaluation, through a literature review and a user study with SOC practitioners. Next, we design a novel LLM-based conceptual framework, named MESSALA, which introduces two new techniques: a granularization guideline and multi-perspective evaluation. MESSALA evaluates reports and provides feedback aligned with veteran SOC practitioners' perceptions. In extensive experiments, MESSALA's evaluation results are closer to those of veteran SOC practitioners than those of existing LLM-based methods, from which we draw two key insights. A qualitative analysis further shows that MESSALA can provide actionable items for improving analysis reports.
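As a rough illustration only (not the authors' implementation), the Python sketch below shows how a checklist-driven, multi-perspective LLM evaluation of a SOC report might be wired together: each checklist item is scored from several analyst perspectives and the scores are aggregated with per-perspective feedback. The checklist items, perspective names, and the call_llm stub are hypothetical placeholders, not taken from the paper.

from statistics import mean

# Hypothetical checklist items; the paper's Analyst-wise checklist is not reproduced here.
CHECKLIST = [
    "Does the report identify the root cause of the incident?",
    "Does the report state the affected assets and the scope of impact?",
    "Does the report recommend concrete remediation steps?",
]

# Hypothetical evaluator perspectives standing in for multi-perspective evaluation.
PERSPECTIVES = ["triage analyst", "incident responder", "SOC manager"]

def build_prompt(report: str, item: str, perspective: str) -> str:
    """Compose one evaluation prompt for a single checklist item and perspective."""
    return (
        f"You are a veteran {perspective} in a security operation center.\n"
        f"Checklist item: {item}\n"
        "Score the report from 1 (poor) to 5 (excellent) and give one sentence of feedback.\n"
        f"Report:\n{report}"
    )

def call_llm(prompt: str) -> tuple[int, str]:
    """Placeholder for an LLM call; swap in your provider's API here."""
    return 3, "Stubbed feedback."

def evaluate(report: str) -> dict:
    """Score a report per checklist item, averaging scores across perspectives."""
    results = {}
    for item in CHECKLIST:
        scores, feedback = [], []
        for perspective in PERSPECTIVES:
            score, comment = call_llm(build_prompt(report, item, perspective))
            scores.append(score)
            feedback.append(f"[{perspective}] {comment}")
        results[item] = {"score": mean(scores), "feedback": feedback}
    return results

if __name__ == "__main__":
    print(evaluate("Example incident analysis report text."))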
Similar Papers
LLMs in the SOC: An Empirical Study of Human-AI Collaboration in Security Operations Centres
Cryptography and Security
Helps computer security experts work faster.
Large Language Models for Security Operations Centers: A Comprehensive Survey
Cryptography and Security
Helps computers find cyber threats faster.