Lightweight Defense Against Adversarial Attacks in Time Series Classification
By: Yi Han
Potential Business Impact:
Makes computer predictions safer from tricks.
As time series classification (TSC) gains prominence, ensuring that TSC models are robust against adversarial attacks is crucial. While adversarial defense is well studied in computer vision (CV), the TSC field has relied primarily on adversarial training (AT), which is computationally expensive. In this paper, we develop five data augmentation-based defense methods tailored for time series; even the most computationally intensive of them increases the computational cost by only 14.07% over the original TSC model, and all five are straightforward to deploy. Leveraging these advantages, we create two combined methods. One of them, an ensemble of all the proposed techniques, not only provides better defense performance than PGD-based AT but also improves the generalization ability of TSC models, while requiring less than one third of the computational resources of PGD-based AT. These methods advance robust TSC in data mining. Furthermore, as foundation models are increasingly explored for time series feature learning, our work offers insights into integrating data augmentation-based adversarial defense with large-scale pre-trained models in future research.
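To make the cost contrast concrete, here is a minimal sketch (not the paper's actual five methods, which the abstract does not name) of why an augmentation-based defense is cheap relative to PGD-based AT: an augmentation such as Gaussian jitter or magnitude scaling adds roughly one extra pass over each batch, while PGD adds an inner loop of extra forward/backward passes. The model, augmentations, and hyperparameters below are illustrative assumptions.

```python
# Sketch: cheap augmentation-based training step vs. PGD-based AT step
# for a toy 1D-CNN time series classifier. The specific augmentations
# (jitter, magnitude scaling) are generic examples, not the paper's methods.
import torch
import torch.nn as nn

class TinyTSC(nn.Module):
    """Small 1D-CNN classifier: input (batch, 1, length) -> class logits."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, n_classes),
        )
    def forward(self, x):
        return self.net(x)

def augment(x, sigma=0.05, scale_range=(0.9, 1.1)):
    """Augmentation defense: Gaussian jitter + random magnitude scaling.
    Cost: one extra transform per batch, no additional model passes."""
    noise = sigma * torch.randn_like(x)
    scale = torch.empty(x.size(0), 1, 1, device=x.device).uniform_(*scale_range)
    return scale * (x + noise)

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Standard L-inf PGD: `steps` extra forward/backward passes per batch,
    which is the main source of adversarial training's computational cost."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into eps-ball
    return x_adv.detach()

model = TinyTSC()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 128)            # 8 toy series of length 128
y = torch.randint(0, 2, (8,))

# Augmentation-based defense: train on augmented views (cheap).
loss = nn.functional.cross_entropy(model(augment(x)), y)
opt.zero_grad(); loss.backward(); opt.step()

# PGD-based AT: train on adversarial examples (~steps x more compute).
loss = nn.functional.cross_entropy(model(pgd_attack(model, x, y)), y)
opt.zero_grad(); loss.backward(); opt.step()
```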
Similar Papers
AimTS: Augmented Series and Image Contrastive Learning for Time Series Classification
Machine Learning (CS)
Teaches computers to understand many types of time data.
Towards Imperceptible Adversarial Attacks for Time Series Classification with Local Perturbations and Frequency Analysis
Cryptography and Security
Makes fake data harder for computers to spot.
Revisiting Backdoor Attacks on Time Series Classification in the Frequency Domain
Machine Learning (CS)
Makes computer programs easily tricked by bad data.