Towards Imperceptible Adversarial Attacks for Time Series Classification with Local Perturbations and Frequency Analysis
By: Wenwei Gu, Renyi Zhong, Jianping Zhang, and more
Potential Business Impact:
Makes fake data harder for people to spot.
Adversarial attacks on time series classification (TSC) models have recently gained attention for their potential to compromise model robustness. Imperceptibility is crucial: an adversarial example that the human vision system (HVS) can detect renders the attack ineffective. Many existing methods fail to produce high-quality imperceptible examples, generating perturbations dominated by perceptible low-frequency components (such as square waves) and applied globally across the series, both of which reduce stealthiness. This paper improves the imperceptibility of adversarial attacks on TSC models by addressing both frequency content and time series locality. We propose the Shapelet-based Frequency-domain Attack (SFAttack), which restricts perturbations to time series shapelets, the locally discriminative subsequences, so that small, localized changes remain effective yet stealthy. In addition, we introduce a low-frequency constraint that confines perturbations to high-frequency components, further enhancing imperceptibility.
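To make the two ingredients concrete, below is a minimal Python sketch, not the authors' SFAttack implementation: the function names, the fixed shapelet window, and the frequency cutoff are illustrative assumptions. It localizes a perturbation to a single shapelet window and zeroes the perturbation's low-frequency Fourier coefficients so that only high-frequency content remains.

```python
import numpy as np

def high_pass_perturbation(delta: np.ndarray, cutoff: int) -> np.ndarray:
    """Low-frequency constraint: zero the lowest `cutoff` Fourier
    coefficients so the perturbation keeps only high-frequency energy."""
    spectrum = np.fft.rfft(delta)
    spectrum[:cutoff] = 0.0  # suppress perceptible low-frequency components
    return np.fft.irfft(spectrum, n=len(delta))

def apply_local_attack(x: np.ndarray, delta: np.ndarray,
                       start: int, length: int, cutoff: int = 3) -> np.ndarray:
    """Local perturbation: add the (high-pass filtered) perturbation only
    inside the shapelet window [start, start + length)."""
    delta = high_pass_perturbation(delta, cutoff)
    x_adv = x.copy()
    x_adv[start:start + length] += delta[:length]
    return x_adv

# Toy usage: perturb a 128-point series only on a 16-point shapelet region.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 128))
delta = 0.05 * rng.standard_normal(16)
x_adv = apply_local_attack(x, delta, start=40, length=16)
```

In the actual method, shapelet locations would be discovered from the data and the perturbation optimized against the target classifier; both are fixed here purely for illustration.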
Similar Papers
Revisiting Backdoor Attacks on Time Series Classification in the Frequency Domain
Machine Learning (CS)
Makes computer programs easily tricked by bad data.
Intriguing Frequency Interpretation of Adversarial Robustness for CNNs and ViTs
CV and Pattern Recognition
Makes AI better at spotting fake pictures.
Fre-CW: Targeted Attack on Time Series Forecasting using Frequency Domain Loss
Machine Learning (CS)
Makes computer predictions easier to trick.