AnomalyExplainer: Explainable AI for LLM-based anomaly detection using BERTViz and Captum
By: Prasasthy Balasubramanian, Dumindu Kankanamge, Ekaterina Gilman, and more
Potential Business Impact:
Helps computers find and explain online dangers faster.
Conversational AI and Large Language Models (LLMs) have become powerful tools across domains, including cybersecurity, where they help detect threats early and improve response times. However, challenges such as false positives and complex model management still limit trust. Although Explainable AI (XAI) aims to make AI decisions more transparent, many security analysts remain uncertain about its usefulness. This study presents a framework that detects anomalies and provides high-quality explanations through the visual tools BERTViz and Captum, combined with natural-language reports based on attention outputs. This reduces manual effort and speeds up remediation. Our comparative analysis showed that RoBERTa offers high accuracy (99.6%) and strong anomaly detection, outperforming Falcon-7B and DeBERTa, and exhibiting better flexibility than the large-scale Mistral-7B on the HDFS dataset from LogHub. User feedback confirms the chatbot's ease of use and improved understanding of anomalies, demonstrating the ability of the developed framework to strengthen cybersecurity workflows.
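To illustrate the kind of explanation pipeline the abstract describes, the sketch below pairs BERTViz attention visualization with Captum token attribution for a RoBERTa sequence classifier. The checkpoint (`roberta-base`), the label mapping (index 1 = anomalous), and the HDFS-style log line are illustrative assumptions, not the paper's released artifacts.

```python
# Minimal sketch, assuming a fine-tuned RoBERTa log-anomaly classifier.
# Checkpoint, label index, and log line are hypothetical placeholders.
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification
from bertviz import head_view
from captum.attr import LayerIntegratedGradients

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2, output_attentions=True
)
model.eval()

log_line = "Received block blk_-123 of size 67108864 from /10.0.0.1"
inputs = tokenizer(log_line, return_tensors="pt")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

# 1) BERTViz: render per-layer, per-head attention for the input tokens
#    (interactive widget; displays inside a Jupyter notebook).
with torch.no_grad():
    outputs = model(**inputs)
head_view(outputs.attentions, tokens)

# 2) Captum: integrated gradients over the embedding layer, attributing
#    the assumed "anomalous" class score (label index 1) to each token.
def forward_fn(input_ids, attention_mask):
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

lig = LayerIntegratedGradients(forward_fn, model.roberta.embeddings)
attributions, delta = lig.attribute(
    inputs["input_ids"],
    # Pad-token baseline is a simple choice; other baselines are possible.
    baselines=torch.full_like(inputs["input_ids"], tokenizer.pad_token_id),
    additional_forward_args=(inputs["attention_mask"],),
    target=1,
    return_convergence_delta=True,
)
token_scores = attributions.sum(dim=-1).squeeze(0)
for tok, score in zip(tokens, token_scores.tolist()):
    print(f"{tok:>12s}  {score:+.4f}")
```

Per-token attribution scores like these are one plausible input for the framework's natural-language reports: tokens with large positive scores can be surfaced to the analyst as the parts of the log line driving the anomaly verdict.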
Similar Papers
A Grey-box Text Attack Framework using Explainable AI
Computation and Language
Tricks AI into mistakes that humans can't see.
Interpretable Ransomware Detection Using Hybrid Large Language Models: A Comparative Analysis of BERT, RoBERTa, and DeBERTa Through LIME and SHAP
Cryptography and Security
Helps computers spot ransomware faster.
Enhancing IoMT Security with Explainable Machine Learning: A Case Study on the CICIOMT2024 Dataset
Cryptography and Security
Shows why computers flag medical device attacks.