Illuminating the Black Box: Real-Time Monitoring of Backdoor Unlearning in CNNs via Explainable AI

Published: November 26, 2025 | arXiv ID: 2511.21291v1

By: Tien Dat Hoang

Potential Business Impact:

Cleans neural networks of hidden malicious instructions (backdoors).

Business Areas:
Image Recognition Data and Analytics, Software

Backdoor attacks pose severe security threats to deep neural networks by embedding malicious triggers that force misclassification. While machine unlearning techniques can remove backdoor behaviors, current methods lack transparency and real-time interpretability. This paper introduces a novel framework that integrates Gradient-weighted Class Activation Mapping (Grad-CAM) into the unlearning process to provide real-time monitoring and explainability. We propose the Trigger Attention Ratio (TAR) metric to quantitatively measure the model's attention shift from trigger patterns to legitimate object features. Our balanced unlearning strategy combines gradient ascent on backdoor samples, Elastic Weight Consolidation (EWC) to prevent catastrophic forgetting, and a recovery phase to restore clean accuracy. Experiments on CIFAR-10 with BadNets attacks demonstrate that our approach reduces the Attack Success Rate (ASR) from 96.51% to 5.52%, a 94.28% relative reduction, while retaining 99.48% of the original clean accuracy (82.06% after unlearning). The integration of explainable AI enables transparent, observable, and verifiable backdoor removal.
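
The TAR metric is not fully specified in this summary; below is a minimal sketch, assuming TAR is the fraction of Grad-CAM attention mass falling inside the known trigger region. The function name, mask convention, and normalization are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def trigger_attention_ratio(cam: np.ndarray, trigger_mask: np.ndarray) -> float:
    """Illustrative TAR: share of Grad-CAM attention inside the trigger region.

    cam          -- 2-D Grad-CAM heatmap (H x W), upsampled to input resolution.
    trigger_mask -- boolean (H x W) mask over trigger pixels (e.g. a BadNets patch).
    TAR near 1.0: the model attends mostly to the trigger;
    TAR near 0.0: attention has shifted to legitimate object features.
    """
    cam = np.maximum(cam, 0.0)   # Grad-CAM maps are ReLU-clipped by definition
    total = cam.sum()
    if total == 0.0:             # degenerate map: no attention anywhere
        return 0.0
    return float(cam[trigger_mask].sum() / total)
```

Computed per batch or per epoch on triggered validation images, a falling TAR curve provides the real-time, observable signal of backdoor removal that the framework advertises.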
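Likewise, here is a hedged PyTorch-style sketch of the balanced unlearning objective the abstract describes: gradient ascent on backdoor samples, an EWC penalty against catastrophic forgetting, and descent on clean samples in the spirit of the recovery phase. The loss weights, the diagonal `fisher` estimates, and the parameter snapshot `theta_star` are standard EWC assumptions, not values from the paper:

```python
import torch
import torch.nn.functional as F

def balanced_unlearning_loss(model, clean_x, clean_y, backdoor_x, backdoor_y,
                             fisher, theta_star, lambda_asc=1.0, lambda_ewc=10.0):
    """Illustrative combined loss for one unlearning step.

    fisher     -- dict: parameter name -> diagonal Fisher information estimate.
    theta_star -- dict: parameter name -> snapshot of pre-unlearning parameters.
    """
    # Descent on clean samples preserves clean accuracy.
    loss_clean = F.cross_entropy(model(clean_x), clean_y)

    # Gradient ascent on backdoor samples: their loss is subtracted from
    # the objective being minimized, so it is maximized.
    loss_backdoor = F.cross_entropy(model(backdoor_x), backdoor_y)

    # EWC penalty keeps parameters important to the clean task near theta_star.
    ewc = sum((fisher[n] * (p - theta_star[n]).pow(2)).sum()
              for n, p in model.named_parameters())

    return loss_clean - lambda_asc * loss_backdoor + (lambda_ewc / 2.0) * ewc
```

Minimizing this loss with a standard optimizer drives ASR down while the clean-sample term and the EWC anchor guard the retained clean accuracy.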

Page Count
5 pages

Category
Computer Science: Cryptography and Security