From One Attack Domain to Another: Contrastive Transfer Learning with Siamese Networks for APT Detection
By: Sidahmed Benabderrahmane, Talal Rahwan
Potential Business Impact:
Finds hidden computer attacks that change their tricks.
Advanced Persistent Threats (APTs) pose a major cybersecurity challenge due to their stealth, persistence, and adaptability. Traditional machine learning detectors struggle with class imbalance, high-dimensional features, and scarce real-world traces. They also often lack transferability: they perform well in the training domain but degrade in novel attack scenarios. We propose a hybrid transfer framework that integrates Transfer Learning, Explainable AI (XAI), contrastive learning, and Siamese networks to improve cross-domain generalization. An attention-based autoencoder supports knowledge transfer across domains, while SHapley Additive exPlanations (SHAP) select stable, informative features to reduce dimensionality and computational cost. A Siamese encoder trained with a contrastive objective aligns source and target representations, increasing anomaly separability and mitigating feature drift. We evaluate on real-world traces from the DARPA Transparent Computing (TC) program and augment them with synthetic attack scenarios to test robustness. Across source-to-target transfers, the approach delivers improved detection scores over classical and deep baselines, demonstrating a scalable, explainable, and transferable solution for APT detection.
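The two core ingredients the abstract names, SHAP-based feature selection and a contrastive objective for the Siamese encoder, can be sketched as follows. This is an illustrative NumPy sketch, not the paper's implementation: the function names, the margin-based contrastive loss, and the mean-|SHAP| ranking rule are assumptions on my part.

```python
import numpy as np

def contrastive_loss(z1, z2, same, margin=1.0):
    """Margin-based contrastive objective (illustrative form):
    pulls same-label embedding pairs together and pushes
    different-label pairs at least `margin` apart."""
    d = np.linalg.norm(z1 - z2, axis=1)                # Euclidean pair distance
    pos = same * d ** 2                                # attract similar pairs
    neg = (1 - same) * np.maximum(0.0, margin - d) ** 2  # repel dissimilar pairs
    return float(np.mean(pos + neg))

def select_top_k_features(shap_values, k):
    """Rank features by mean absolute SHAP value and keep the k most
    informative ones; a stand-in for the paper's SHAP-driven
    dimensionality reduction (exact criterion assumed)."""
    importance = np.abs(shap_values).mean(axis=0)      # per-feature importance
    return np.argsort(importance)[::-1][:k]            # indices, most important first
```

A Siamese network would apply one shared encoder to both elements of each pair and minimize this loss over source/target pairs, so that benign and anomalous activity remain separable after transfer.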
Similar Papers
Ranking-Enhanced Anomaly Detection Using Active Learning-Assisted Attention Adversarial Dual AutoEncoders
Machine Learning (CS)
Finds hidden computer attacks with less work.
Adversarial Augmentation and Active Sampling for Robust Cyber Anomaly Detection
Cryptography and Security
Finds hidden computer attacks with less data.
Explainable AI for Enhancing IDS Against Advanced Persistent Kill Chain
Cryptography and Security
Finds sneaky computer attacks faster with fewer clues.