Score: 1

False Sense of Security: Why Probing-based Malicious Input Detection Fails to Generalize

Published: September 4, 2025 | arXiv ID: 2509.03888v1

By: Cheng Wang , Zeming Wei , Qin Liu and more

Potential Business Impact:

Finds AI safety checks are easily fooled.

Business Areas:
Intrusion Detection Information Technology, Privacy and Security

Large Language Models (LLMs) can comply with harmful instructions, raising serious safety concerns despite their impressive capabilities. Recent work has leveraged probing-based approaches to study the separability of malicious and benign inputs in LLMs' internal representations, and researchers have proposed using such probing methods for safety detection. We systematically re-examine this paradigm. Motivated by poor out-of-distribution performance, we hypothesize that probes learn superficial patterns rather than semantic harmfulness. Through controlled experiments, we confirm this hypothesis and identify the specific patterns learned: instructional patterns and trigger words. Our investigation follows a systematic approach, progressing from demonstrating comparable performance of simple n-gram methods, to controlled experiments with semantically cleaned datasets, to detailed analysis of pattern dependencies. These results reveal a false sense of security around current probing-based approaches and highlight the need to redesign both models and evaluation protocols, for which we provide further discussions in the hope of suggesting responsible further research in this direction. We have open-sourced the project at https://github.com/WangCheng0116/Why-Probe-Fails.

Country of Origin
πŸ‡¨πŸ‡³ πŸ‡ΈπŸ‡¬ China, Singapore

Page Count
15 pages

Category
Computer Science:
Computation and Language