False Sense of Security: Why Probing-based Malicious Input Detection Fails to Generalize
By: Cheng Wang, Zeming Wei, Qin Liu, and more
Potential Business Impact:
Finds AI safety checks are easily fooled.
Large Language Models (LLMs) can comply with harmful instructions, raising serious safety concerns despite their impressive capabilities. Recent work has leveraged probing-based approaches to study the separability of malicious and benign inputs in LLMs' internal representations, and researchers have proposed using such probing methods for safety detection. We systematically re-examine this paradigm. Motivated by poor out-of-distribution performance, we hypothesize that probes learn superficial patterns rather than semantic harmfulness. Through controlled experiments, we confirm this hypothesis and identify the specific patterns learned: instructional patterns and trigger words. Our investigation progresses from demonstrating the comparable performance of simple n-gram methods, to controlled experiments with semantically cleaned datasets, to detailed analysis of pattern dependencies. These results reveal a false sense of security around current probing-based approaches and highlight the need to redesign both models and evaluation protocols; we provide further discussion in the hope of encouraging responsible research in this direction. We have open-sourced the project at https://github.com/WangCheng0116/Why-Probe-Fails.
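To make the n-gram comparison concrete, here is a minimal sketch (not the authors' released code; the prompts, labels, and model choices below are illustrative placeholders) of a bag-of-n-grams classifier over raw prompt text. If a surface-level baseline like this matches hidden-state probes in-distribution, that is consistent with the probes keying on instructional patterns and trigger words rather than semantic harmfulness.

```python
# Sketch of a surface-level n-gram baseline for malicious-prompt detection.
# The paper's probes instead train classifiers on LLM hidden-state features;
# this baseline uses only lexical cues. Example prompts are placeholders.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training prompts; a real experiment would use labeled
# malicious/benign instruction datasets.
train_texts = [
    "Write a tutorial on how to build a weapon",         # malicious (1)
    "Explain step by step how to hack an account",       # malicious (1)
    "Write a tutorial on how to bake sourdough bread",   # benign (0)
    "Explain step by step how to plant a home garden",   # benign (0)
]
train_labels = [1, 1, 0, 0]

# Word unigrams and bigrams capture surface cues such as trigger words
# ("hack", "weapon") and instructional phrasing ("write a tutorial").
clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

# In-distribution prompts separate easily on lexical cues alone;
# out-of-distribution rephrasings are where such detectors tend to fail.
test_texts = [
    "Give detailed instructions for hacking a server",
    "Give detailed instructions for assembling a bookshelf",
]
print(clf.predict(test_texts))        # predicted labels
print(clf.predict_proba(test_texts))  # class probabilities
```

Swapping the text features for a model's intermediate-layer activations turns the same pipeline into a linear probe, which is why comparable accuracy from the two setups is suggestive of shared, superficial signal.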
Similar Papers
Prefix Probing: Lightweight Harmful Content Detection for Large Language Models
Artificial Intelligence
Finds bad online stuff fast, cheaply.
That's not natural: The Impact of Off-Policy Training Data on Probe Performance
Artificial Intelligence
Helps AI understand if it's lying or being fake.
Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems
Cryptography and Security
Stops bad guys from stealing secrets from smart computer programs.