Score: 1

Learning to Detect Unknown Jailbreak Attacks in Large Vision-Language Models

Published: October 17, 2025 | arXiv ID: 2510.15430v1

By: Shuang Liang, Zhihao Xu, Jialing Tao, and more

BigTech Affiliations: Alibaba

Potential Business Impact:

Detects attempts to trick vision-language AI into harmful behavior, including attack types it has never seen before.

Business Areas:
Image Recognition, Data and Analytics, Software

Despite extensive alignment efforts, Large Vision-Language Models (LVLMs) remain vulnerable to jailbreak attacks, posing serious safety risks. To address this risk, existing detection methods either learn attack-specific parameters, which hinders generalization to unseen attacks, or rely on heuristic principles, limiting accuracy and efficiency. To overcome these limitations, we propose Learning to Detect (LoD), a general framework that accurately detects unknown jailbreak attacks by shifting the focus from attack-specific learning to task-specific learning. The framework comprises a Multi-modal Safety Concept Activation Vector module for safety-oriented representation learning and a Safety Pattern Auto-Encoder module for unsupervised attack classification. Extensive experiments show that our method achieves consistently higher detection AUROC on diverse unknown attacks while improving efficiency. The code is available at https://anonymous.4open.science/r/Learning-to-Detect-51CB.
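
The two modules named in the abstract suggest a familiar anomaly-detection recipe: project model activations onto a learned safety-concept direction, then fit an auto-encoder on the safety patterns of benign inputs so that poorly reconstructed inputs flag unseen attacks. The sketch below illustrates that recipe under stated assumptions; all class names, dimensions, and the random concept vector are illustrative placeholders, not the authors' released code (see the linked repository for that).

```python
# Hypothetical sketch of the abstract's two-stage idea: (1) project per-layer
# hidden states onto a "safety concept" direction, (2) flag inputs whose
# safety patterns an auto-encoder trained only on benign data reconstructs
# poorly. Dimensions and the concept vector are illustrative assumptions.
import torch
import torch.nn as nn


class SafetyPatternAutoEncoder(nn.Module):
    """Small auto-encoder over per-layer safety activations (illustrative)."""

    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 32), nn.ReLU(), nn.Linear(32, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))


def safety_scores(hidden_states: torch.Tensor, concept_vector: torch.Tensor) -> torch.Tensor:
    # Project each layer's hidden state onto the safety-concept direction,
    # yielding one scalar "safety activation" per layer: (batch, layers).
    return hidden_states @ concept_vector


# --- toy usage ---
torch.manual_seed(0)
layers, hid = 24, 64
concept = torch.randn(hid)
concept /= concept.norm()  # stand-in for a learned safety concept vector

benign = torch.randn(256, layers, hid)      # stand-in for LVLM activations
patterns = safety_scores(benign, concept)   # (256, layers)

ae = SafetyPatternAutoEncoder(dim=layers)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                        # fit on benign patterns only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(patterns), patterns)
    loss.backward()
    opt.step()

# At test time, high reconstruction error marks a likely (unseen) jailbreak.
test = safety_scores(torch.randn(4, layers, hid) + 2.0, concept)
err = ((ae(test) - test) ** 2).mean(dim=1)
print("anomaly scores:", err.tolist())
```

Because the auto-encoder is fit only on benign safety patterns, no attack-specific labels are needed, which is the property that would let such a detector generalize to attacks unseen at training time.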

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition