Rethinking Jailbreak Detection of Large Vision Language Models with Representational Contrastive Scoring
By: Peichun Hua, Hao Li, Shanghao Shi, and more
Potential Business Impact:
Stops AI from being tricked by bad questions.
Large Vision-Language Models (LVLMs) are vulnerable to a growing array of multimodal jailbreak attacks, necessitating defenses that are both generalizable to novel threats and efficient for practical deployment. Many current strategies fall short, either targeting specific attack patterns, which limits generalization, or imposing high computational overhead. While lightweight anomaly-detection methods offer a promising direction, we find that their common one-class design tends to confuse novel benign inputs with malicious ones, leading to unreliable over-rejection. To address this, we propose Representational Contrastive Scoring (RCS), a framework built on a key insight: the most potent safety signals reside within the LVLM's own internal representations. Our approach inspects the internal geometry of these representations, learning a lightweight projection to maximally separate benign and malicious inputs in safety-critical layers. This enables a simple yet powerful contrastive score that differentiates true malicious intent from mere novelty. Our instantiations, MCD (Mahalanobis Contrastive Detection) and KCD (K-nearest Contrastive Detection), achieve state-of-the-art performance on a challenging evaluation protocol designed to test generalization to unseen attack types. This work demonstrates that effective jailbreak detection can be achieved by applying simple, interpretable statistical methods to the appropriate internal representations, offering a practical path towards safer LVLM deployment. Our code is available on GitHub at https://github.com/sarendis56/Jailbreak_Detection_RCS.
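To make the contrastive-scoring idea concrete, the sketch below shows a Mahalanobis-style contrastive detector over hidden-layer features, assuming per-input feature vectors have already been extracted from a safety-critical LVLM layer for benign and malicious calibration sets. The function names, the use of raw (unprojected) features, and the threshold `tau` are illustrative assumptions; the paper's learned projection and the exact MCD/KCD formulations are not reproduced here.

```python
import numpy as np

def fit_gaussian(feats, eps=1e-6):
    """Fit a class-conditional Gaussian (mean and precision) to calibration features.

    feats: (N, D) array of hidden-layer features from one class (benign or malicious).
    """
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + eps * np.eye(feats.shape[1])  # regularize for invertibility
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, prec):
    """Mahalanobis distance of a single feature vector x to a fitted Gaussian."""
    d = x - mu
    return float(np.sqrt(d @ prec @ d))

def contrastive_score(x, benign_stats, malicious_stats):
    """Contrastive score: distance-to-benign minus distance-to-malicious.

    Large positive values mean x lies much closer to the malicious calibration set
    than to the benign one; inputs far from *both* sets (mere novelty) score near zero.
    """
    return mahalanobis(x, *benign_stats) - mahalanobis(x, *malicious_stats)

# Hypothetical usage with pre-extracted hidden-layer features:
# benign_stats = fit_gaussian(benign_feats)        # (N_b, D) benign calibration features
# malicious_stats = fit_gaussian(malicious_feats)  # (N_m, D) malicious calibration features
# flag = contrastive_score(x_feat, benign_stats, malicious_stats) > tau
```

In this sketch an input is flagged only when its score exceeds a threshold calibrated on held-out benign data, which reflects the abstract's point that contrasting the two reference sets separates true malicious intent from mere novelty, since an input far from both sets receives a near-zero score rather than an automatic rejection.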
Similar Papers
Learning to Detect Unknown Jailbreak Attacks in Large Vision-Language Models: A Unified and Accurate Approach
Cryptography and Security
Stops AI from being tricked by bad questions.
Learning to Detect Unknown Jailbreak Attacks in Large Vision-Language Models
CV and Pattern Recognition
Stops AI from being tricked into bad things.
VRSA: Jailbreaking Multimodal Large Language Models through Visual Reasoning Sequential Attack
CV and Pattern Recognition
Stops AI from being tricked into saying bad things.