Score: 1

SpatialJB: How Text Distribution Art Becomes the "Jailbreak Key" for LLM Guardrails

Published: January 14, 2026 | arXiv ID: 2601.09321v1

By: Zhiyi Mou, Jingyuan Yang, Zeheng Qian, and more

Potential Business Impact:

Lets harmful requests sneak past automated safety filters.

Business Areas:
Text Analytics, Data and Analytics, Software

While Large Language Models (LLMs) have powerful capabilities, they remain vulnerable to jailbreak attacks, a critical barrier to their safe deployment in real-time web applications. Current commercial LLM providers deploy output guardrails to filter harmful outputs, yet these defenses are not impenetrable. Because LLMs rely on autoregressive, token-by-token inference, their semantic representations lack robustness to spatially structured perturbations, such as redistributing tokens across different rows, columns, or diagonals. Exploiting this spatial weakness of the Transformer, we propose SpatialJB, which disrupts the model's output generation process and allows harmful content to bypass guardrails undetected. Comprehensive experiments on leading LLMs achieve a nearly 100% attack success rate (ASR), demonstrating the high effectiveness of SpatialJB. Even with advanced output guardrails such as the OpenAI Moderation API in place, SpatialJB consistently maintains a success rate exceeding 75%, outperforming current jailbreak techniques by a significant margin. SpatialJB exposes a key weakness in current guardrails and highlights the importance of spatial semantics, offering new insights to advance LLM safety research. To prevent potential misuse, we also present baseline defense strategies against SpatialJB and evaluate their effectiveness in mitigating such attacks. The code for the attack, baseline defenses, and a demo are available at https://anonymous.4open.science/r/SpatialJailbreak-8E63.
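
To make the spatial redistribution idea in the abstract concrete, below is a minimal, hypothetical Python sketch of a column-wise text layout: characters are written into a grid column by column, so reading the result row by row no longer yields contiguous words. The function names (to_column_grid, from_column_grid) and the specific grid scheme are illustrative assumptions, not the authors' SpatialJB implementation; the actual attack and defenses are in the linked repository.

```python
import math

# Illustrative sketch only: a hypothetical column-wise text redistribution,
# not the authors' SpatialJB code.

def to_column_grid(text: str, n_cols: int = 4) -> str:
    """Write characters into a grid column by column, emit it row by row."""
    chars = list(text)
    n_rows = math.ceil(len(chars) / n_cols)
    # Pad so the grid is rectangular.
    chars += [" "] * (n_rows * n_cols - len(chars))

    # Fill the grid column-first.
    grid = [[" "] * n_cols for _ in range(n_rows)]
    for idx, ch in enumerate(chars):
        col, row = divmod(idx, n_rows)
        grid[row][col] = ch

    # Emitting row by row scatters each word across the output lines.
    return "\n".join("".join(row) for row in grid)


def from_column_grid(grid_text: str) -> str:
    """Invert the layout: read the grid column by column to recover the text."""
    rows = grid_text.split("\n")
    n_cols = len(rows[0])
    recovered = "".join(
        rows[r][c] for c in range(n_cols) for r in range(len(rows))
    )
    return recovered.rstrip()  # drop padding added by to_column_grid


if __name__ == "__main__":
    original = "example phrase"
    laid_out = to_column_grid(original, n_cols=4)
    print(laid_out)
    assert from_column_grid(laid_out) == original
```

As a usage note, a row-by-row reader (or a token-level filter scanning left to right) sees no intact phrase in the grid output, while a reader aware of the column layout can recover the original string; the abstract's claim is that guardrails struggle with exactly this kind of spatially rearranged content.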

Country of Origin
🇨🇳 🇦🇺 🇭🇰 Australia, China, Hong Kong

Page Count
13 pages

Category
Computer Science:
Cryptography and Security