Dual Explanations via Subgraph Matching for Malware Detection
By: Hossein Shokouhinejad, Roozbeh Razavi-Far, Griffin Higgins, and more
Potential Business Impact:
Helps computers spot bad software by its actions.
Interpretable malware detection is crucial for understanding harmful behaviors and building trust in automated security systems. Traditional explainable methods for Graph Neural Networks (GNNs) often highlight important regions within a graph but fail to associate them with known benign or malicious behavioral patterns. This limitation reduces their utility in security contexts, where alignment with verified prototypes is essential. In this work, we introduce a novel dual prototype-driven explainable framework that interprets GNN-based malware detection decisions. This dual explainable framework integrates a base explainer (a state-of-the-art explainer) with a novel second-level explainer, called the SubMatch explainer, which is designed using a subgraph matching technique. The proposed explainer assigns interpretable scores to nodes based on their association with matched subgraphs, offering a fine-grained distinction between benign and malicious regions. This prototype-guided scoring mechanism enables more interpretable, behavior-aligned explanations. Experimental results demonstrate that our method preserves high detection performance while significantly improving interpretability in malware analysis.
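To make the prototype-guided scoring idea concrete, the sketch below shows one minimal way a node could be scored by how often it falls inside subgraphs matched to malicious versus benign prototypes. This is an illustrative assumption, not the paper's implementation: the function name `score_nodes` and the representation of matches as plain node sets are hypothetical, and the actual SubMatch explainer may weight or normalize matches differently.

```python
# Hypothetical sketch of prototype-guided node scoring.
# Assumption: a subgraph-matching routine has already returned, for each
# prototype, the set of graph nodes covered by the match.

def score_nodes(graph_nodes, malicious_matches, benign_matches):
    """Assign each node a score in [-1, 1]: positive when the node appears
    mostly in subgraphs matched to malicious prototypes, negative when it
    appears mostly in benign-prototype matches, 0 when unmatched."""
    scores = {}
    for node in graph_nodes:
        mal_hits = sum(node in match for match in malicious_matches)
        ben_hits = sum(node in match for match in benign_matches)
        total = mal_hits + ben_hits
        scores[node] = 0.0 if total == 0 else (mal_hits - ben_hits) / total
    return scores


# Toy example: nodes 0-5 of a program graph; match sets are assumed inputs.
nodes = range(6)
malicious_matches = [{1, 2, 3}, {2, 3}]
benign_matches = [{0, 4}]
print(score_nodes(nodes, malicious_matches, benign_matches))
```

Under this toy scoring rule, nodes covered only by malicious-prototype matches score +1, nodes covered only by benign matches score -1, and unmatched nodes score 0, giving the kind of fine-grained benign-versus-malicious distinction described in the abstract.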
Similar Papers
Recent Advances in Malware Detection: Graph Learning and Explainability
Cryptography and Security
Finds computer viruses by how they connect.
How Explanations Leak the Decision Logic: Stealing Graph Neural Networks via Explanation Alignment
Machine Learning (CS)
Steals AI's thinking by using its explanations.
Explainable Ensemble Learning for Graph-Based Malware Detection
Cryptography and Security
Finds computer viruses and explains why.