MetaBreak: Jailbreaking Online LLM Services via Special Token Manipulation
By: Wentian Zhu, Zhen Xiang, Wei Niu, and more
Potential Business Impact:
Breaks AI safety rules using hidden words.
Unlike regular tokens derived from existing text corpora, special tokens are artificially created to annotate structured conversations during the fine-tuning of Large Language Models (LLMs). Serving as metadata of the training data, these tokens play a crucial role in instructing LLMs to generate coherent, context-aware responses. We demonstrate that special tokens can be exploited to construct four attack primitives, with which malicious users can reliably bypass the internal safety alignment of online LLM services and simultaneously circumvent state-of-the-art (SOTA) external content moderation systems. Moreover, we find that this threat is challenging to address: aggressive defense mechanisms, such as input sanitization that removes special tokens entirely (as suggested in academia), are less effective than anticipated, because they can be evaded by replacing the special tokens with regular tokens of high semantic similarity in the tokenizer's embedding space. We systematically evaluated our method, named MetaBreak, both in a lab environment and on commercial LLM platforms. Our approach achieves jailbreak rates comparable to SOTA prompt-engineering-based solutions when no content moderation is deployed. When content moderation is in place, however, MetaBreak outperforms the SOTA solutions PAP and GPTFuzzer by 11.6% and 34.8%, respectively. Finally, since MetaBreak employs a fundamentally different strategy from prompt engineering, the two approaches can work synergistically: augmenting PAP and GPTFuzzer with MetaBreak boosts their jailbreak rates by 24.3% and 20.2%, respectively.
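To make the abstract's notion of "special tokens" concrete, the minimal sketch below (not the paper's code; the model name is only an illustrative assumption) renders a conversation with a Hugging Face chat template. The role-delimiting markers it inserts, such as <|im_start|> and <|im_end|>, are exactly the kind of fine-tuning metadata tokens the paper manipulates; they live in the vocabulary but never occur in ordinary text corpora.

```python
# Minimal illustration of chat-template special tokens (not from the paper).
# Assumes the transformers library and an instruct model with a chat template;
# "Qwen/Qwen2.5-7B-Instruct" is just an example choice.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Render the conversation through the model's chat template. The inserted
# markers (e.g. <|im_start|>, <|im_end|>) are special tokens that annotate
# the structure of the dialogue rather than carry natural-language content.
rendered = tokenizer.apply_chat_template(messages, tokenize=False)
print(rendered)

# Special tokens are defined in the tokenizer itself, separate from tokens
# learned from text corpora.
print(tokenizer.special_tokens_map)
```

Because these markers tell the model where system, user, and assistant turns begin and end, any input channel that lets users inject or imitate them effectively lets users rewrite the conversation structure the model trusts, which is the lever the four attack primitives build on.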
Similar Papers
BreakFun: Jailbreaking LLMs via Schema Exploitation
Cryptography and Security
Breaks AI's rules to make it say bad things.
Machine Learning for Detection and Analysis of Novel LLM Jailbreaks
Computation and Language
Stops AI from being tricked into saying bad things.
NLP Methods for Detecting Novel LLM Jailbreaks and Keyword Analysis with BERT
Computation and Language
Stops AI from being tricked into saying bad things.