Enhancing IoMT Security with Explainable Machine Learning: A Case Study on the CICIOMT2024 Dataset
By: Mohammed Yacoubi, Omar Moussaoui, C. Drocourt
Potential Business Impact:
Shows why computers flag medical device attacks.
Explainable Artificial Intelligence (XAI) enhances the transparency and interpretability of AI models, addressing their inherent opacity. In cybersecurity, particularly within the Internet of Medical Things (IoMT), the black-box nature of AI-driven threat detection poses a significant challenge. Cybersecurity professionals must not only detect attacks but also understand the reasoning behind AI decisions to ensure trust and accountability. The rapid increase in cyberattacks targeting connected medical devices threatens patient safety and data privacy, necessitating advanced AI-driven solutions. This study compares two ensemble learning techniques, bagging and boosting, for cyberattack classification in IoMT environments. We selected Random Forest for bagging and CatBoost for boosting. Random Forest reduces variance, while CatBoost reduces bias by combining weak classifiers into a strong ensemble model, making both effective for detecting sophisticated attacks. However, their complexity often reduces transparency, making it difficult for cybersecurity professionals to interpret and trust their decisions. To address this issue, we apply XAI techniques to generate local and global explanations, providing insights into AI decision-making. Using SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations), we highlight feature importance to help stakeholders understand the key factors driving cyber threat detection.
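SHAP attributes a model's output to individual input features via Shapley values from cooperative game theory; the library itself uses efficient approximations, but the underlying idea can be illustrated with an exact computation on a toy problem. The sketch below, which is not from the study, computes exact Shapley values for a hypothetical additive "detector score" over three made-up IoMT flow features (names and weights are illustrative assumptions):

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values for a small feature set.

    `value` maps a frozenset of feature names to a model score;
    a feature's Shapley value is its weighted average marginal
    contribution over all subsets of the remaining features.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += w * (value(s | {f}) - value(s))
        phi[f] = total
    return phi

# Toy additive "attack score": hypothetical feature contributions,
# standing in for a real IoMT traffic classifier.
base = 0.1
weights = {"pkt_rate": 0.5, "payload_entropy": 0.3, "dst_port_var": 0.1}

def value(subset):
    return base + sum(weights[f] for f in subset)

phi = shapley_values(list(weights), value)
# For an additive score, each Shapley value equals the feature's weight,
# and they sum to value(all features) - value(empty set).
```

Because the toy score is additive, the attributions recover the weights exactly; for a real Random Forest or CatBoost model, SHAP's TreeExplainer computes analogous attributions efficiently without enumerating all feature subsets.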
Similar Papers
Financial Fraud Detection Using Explainable AI and Stacking Ensemble Methods
Machine Learning (CS)
Makes fraud detection easier to understand.
A Comparative Analysis of Ensemble-Based Machine Learning Approaches with Explainable AI for Multi-Class Intrusion Detection in Drone Networks
Cryptography and Security
Finds drone hackers by watching their signals.
Explainable but Vulnerable: Adversarial Attacks on XAI Explanation in Cybersecurity Applications
Cryptography and Security
Makes AI explanations harder to trick.