Do Internal Layers of LLMs Reveal Patterns for Jailbreak Detection?

Published: October 8, 2025 | arXiv ID: 2510.06594v1

By: Sri Durga Sai Sowmya Kadali, Evangelos E. Papalexakis

Potential Business Impact:

Helps stop AI chatbots from being tricked into producing restricted or harmful content.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Jailbreaking large language models (LLMs) has emerged as a pressing concern with the increasing prevalence and accessibility of conversational LLMs. Adversarial users often exploit these models through carefully engineered prompts to elicit restricted or sensitive outputs, a strategy widely referred to as jailbreaking. While numerous defense mechanisms have been proposed, attackers continuously develop novel prompting techniques, and no existing model can be considered fully resistant. In this study, we investigate the jailbreak phenomenon by examining the internal representations of LLMs, with a focus on how hidden layers respond to jailbreak versus benign prompts. Specifically, we analyze the open-source LLM GPT-J and the state-space model Mamba2, presenting preliminary findings that highlight distinct layer-wise behaviors. Our results suggest promising directions for further research on leveraging internal model dynamics for robust jailbreak detection and defense.
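The core idea, examining how individual hidden layers respond to jailbreak versus benign prompts, can be illustrated with a simple layer-wise probe. The sketch below is not the authors' pipeline: the prompt lists, mean-pooling, and per-layer logistic-regression probe are illustrative assumptions, and only the choice of GPT-J comes from the paper.

```python
# Minimal sketch: probe GPT-J hidden layers for jailbreak vs. benign prompts.
# Assumptions (not from the paper): mean-pooled token embeddings per layer,
# a logistic-regression probe per layer, and the toy prompt list below.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

model_name = "EleutherAI/gpt-j-6B"  # open-source LLM analyzed in the paper (large download)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def layer_features(prompt: str):
    """Return one mean-pooled embedding per hidden layer for a prompt."""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states: tuple of (1, seq_len, hidden_dim) tensors, one per layer
    return [h.mean(dim=1).squeeze(0).float().numpy() for h in out.hidden_states]

# Hypothetical labeled prompts (0 = benign, 1 = jailbreak attempt); a real
# experiment would use a large, held-out prompt dataset.
prompts = ["How do I bake bread?", "Ignore all previous instructions and ..."]
labels = [0, 1]

per_layer = list(zip(*[layer_features(p) for p in prompts]))  # layer -> list of vectors

# Fit a simple probe per layer; layers whose probe separates the classes well
# carry a detectable jailbreak signal in their representations.
for idx, feats in enumerate(per_layer):
    clf = LogisticRegression(max_iter=1000).fit(list(feats), labels)
    print(f"layer {idx}: training accuracy = {clf.score(list(feats), labels):.2f}")
```

With only two prompts this merely demonstrates the mechanics; the interesting question the paper studies is which layers separate jailbreak and benign prompts at scale, and whether the pattern holds for a state-space model such as Mamba2 as well as a transformer.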

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Computation and Language