Score: 1

Large Language Models for Explainable Threat Intelligence

Published: November 7, 2025 | arXiv ID: 2511.05406v1

By: Tiago Dinis, Miguel Correia, Roger Tavares

Potential Business Impact:

Identifies cyber threats and shows the evidence and reasoning behind each answer.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As cyber threats continue to grow in complexity, traditional security mechanisms struggle to keep up. Large language models (LLMs) offer significant potential in cybersecurity due to their advanced capabilities in text processing and generation. This paper explores the use of LLMs with retrieval-augmented generation (RAG) to obtain threat intelligence by combining real-time information retrieval with domain-specific data. The proposed system, RAGRecon, uses an LLM with RAG to answer questions about cybersecurity threats. Moreover, it makes this form of Artificial Intelligence (AI) explainable by generating and visually presenting to the user a knowledge graph for every reply. This increases the transparency and interpretability of the model's reasoning, allowing analysts to better understand the connections the system makes based on the context retrieved by the RAG component. We evaluated RAGRecon experimentally with two datasets and seven different LLMs, and for the best combinations the responses matched the reference responses more than 91% of the time.
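The abstract describes a pipeline of three steps: retrieve threat-intelligence context, generate an answer with an LLM, and build a knowledge graph from that context so the analyst can inspect the reasoning. The sketch below is a minimal, self-contained Python illustration of that general idea, not RAGRecon's actual code: the functions retrieve, call_llm, and extract_triples are illustrative assumptions, the retrieval is toy word-overlap scoring, and call_llm is a placeholder for a real model API.

```python
# Minimal sketch of a RAG pipeline that also emits a knowledge graph per answer.
# All names here (retrieve, call_llm, extract_triples) are illustrative
# assumptions, not the RAGRecon implementation described in the paper.

from dataclasses import dataclass, field


@dataclass
class Answer:
    text: str
    # Knowledge graph as (subject, relation, object) triples drawn from context.
    triples: list[tuple[str, str, str]] = field(default_factory=list)


def retrieve(question: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy lexical retrieval: rank documents by word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:k]


def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (e.g., a hosted chat model)."""
    return "[LLM answer grounded in the retrieved context]"


def extract_triples(context: list[str]) -> list[tuple[str, str, str]]:
    """Very rough triple extraction from 'A <verb> B' style sentences.
    A real system would use an LLM or an information-extraction model here."""
    triples = []
    for doc in context:
        for sentence in doc.split("."):
            words = sentence.strip().split()
            if len(words) >= 3:
                triples.append((words[0], words[1], " ".join(words[2:])))
    return triples


def answer_with_graph(question: str, corpus: list[str]) -> Answer:
    """Retrieve context, ask the LLM, and attach a knowledge graph of the context."""
    context = retrieve(question, corpus)
    prompt = "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    return Answer(text=call_llm(prompt), triples=extract_triples(context))


if __name__ == "__main__":
    threat_reports = [
        "APT29 uses spearphishing attachments. The group targets government networks.",
        "Emotet spreads via malicious Office macros. It delivers secondary payloads.",
    ]
    result = answer_with_graph("How does Emotet spread?", threat_reports)
    print(result.text)
    for s, r, o in result.triples:
        print(f"({s}) -[{r}]-> ({o})")
```

In this sketch the graph is just a list of triples printed as edges; the paper's point is that rendering such a graph alongside each reply lets the analyst see which retrieved facts the answer rests on.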

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Computation and Language