Do Internal Layers of LLMs Reveal Patterns for Jailbreak Detection?
By: Sri Durga Sai Sowmya Kadali, Evangelos E. Papalexakis
Potential Business Impact:
Stops AI from being tricked into saying bad things.
Jailbreaking large language models (LLMs) has emerged as a pressing concern with the increasing prevalence and accessibility of conversational LLMs. Adversarial users often exploit these models through carefully engineered prompts to elicit restricted or sensitive outputs, a strategy widely referred to as jailbreaking. While numerous defense mechanisms have been proposed, attackers continuously develop novel prompting techniques, and no existing model can be considered fully resistant. In this study, we investigate the jailbreak phenomenon by examining the internal representations of LLMs, with a focus on how hidden layers respond to jailbreak versus benign prompts. Specifically, we analyze the open-source LLM GPT-J and the state-space model Mamba2, presenting preliminary findings that highlight distinct layer-wise behaviors. Our results suggest promising directions for further research on leveraging internal model dynamics for robust jailbreak detection and defense.
Similar Papers
Machine Learning for Detection and Analysis of Novel LLM Jailbreaks
Computation and Language
Stops AI from being tricked into saying bad things.
Uncovering the Persuasive Fingerprint of LLMs in Jailbreaking Attacks
Computation and Language
Makes AI more likely to follow bad instructions.
NLP Methods for Detecting Novel LLM Jailbreaks and Keyword Analysis with BERT
Computation and Language
Stops AI from being tricked into saying bad things.