Unleashing the Power of Pre-trained Encoders for Universal Adversarial Attack Detection

Published: April 1, 2025 | arXiv ID: 2504.00429v1

By: Yinghe Zhang, Chi Liu, Shuai Zhou, and more

Potential Business Impact:

Stops sneaky computer tricks from fooling AI.

Business Areas:
Image Recognition, Data and Analytics, Software

Adversarial attacks pose a critical security threat to real-world AI systems by injecting human-imperceptible perturbations into benign samples to induce misclassification in deep learning models. While existing detection methods, such as Bayesian uncertainty estimation and activation pattern analysis, have made progress through feature engineering, their reliance on handcrafted feature design and prior knowledge of attack patterns limits generalization and incurs high engineering cost. To address these limitations, this paper proposes a lightweight adversarial detection framework based on the large-scale pre-trained vision-language model CLIP. Departing from conventional adversarial feature characterization, we instead adopt an anomaly detection perspective. By jointly fine-tuning CLIP's visual and text encoders with trainable adapter networks and learnable prompts, we construct a compact representation space tailored to natural images. Compared with traditional methods, the detector generalizes substantially better to both known and unknown attack patterns while significantly reducing training overhead. This study provides a novel technical pathway toward a parameter-efficient and attack-agnostic defense paradigm, enhancing the robustness of vision systems against evolving adversarial threats.
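
A minimal sketch of the idea, assuming PyTorch and OpenAI's clip package; the paper's actual implementation is not given in the abstract, so names like Adapter, text_anchor, and anomaly_score are illustrative, and the trainable text anchor is a simplification of the paper's learnable prompts. A frozen CLIP backbone is extended with a small residual adapter on the image side, and detection is framed as a one-class anomaly score against a "natural image" anchor.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP

class Adapter(nn.Module):
    """Small residual bottleneck adapter on top of a frozen CLIP encoder (illustrative)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, bottleneck),
            nn.ReLU(inplace=True),
            nn.Linear(bottleneck, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.net(x)  # residual: keep pre-trained features, learn a correction

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model.eval()
for p in model.parameters():  # the large pre-trained backbone stays frozen
    p.requires_grad_(False)

embed_dim = model.visual.output_dim  # 512 for ViT-B/32
img_adapter = Adapter(embed_dim).to(device)

# Simplified stand-in for learnable prompts: a trainable "natural image" text
# anchor, initialized from a hand-written prompt and refined jointly with the
# adapter during fine-tuning. (Assumption; the paper tunes prompt tokens.)
with torch.no_grad():
    init = model.encode_text(
        clip.tokenize(["a photo of a natural image"]).to(device)
    )
text_anchor = nn.Parameter(F.normalize(init.float(), dim=-1))

def anomaly_score(images: torch.Tensor) -> torch.Tensor:
    """Higher score = farther from the natural-image anchor = more suspicious."""
    feats = img_adapter(model.encode_image(images).float())
    feats = F.normalize(feats, dim=-1)
    sim = feats @ F.normalize(text_anchor, dim=-1).t()  # cosine similarity, (B, 1)
    return 1.0 - sim.squeeze(-1)

# One-class training objective: pull clean images toward the anchor, updating
# only the adapter and the anchor -- a parameter-efficient fine-tune.
trainable = list(img_adapter.parameters()) + [text_anchor]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```

In use, the adapter and anchor would be fine-tuned on clean images only, so benign inputs cluster tightly around the anchor; at test time, an image whose anomaly score exceeds a threshold calibrated on held-out clean data is flagged as adversarial, independent of which attack produced it.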

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition