Exploiting Latent Space Discontinuities for Building Universal LLM Jailbreaks and Data Extraction Attacks
By: Kayua Oleques Paim, Rodrigo Brandao Mansilha, Diego Kreutz, and more
Potential Business Impact:
Tricks AI models into bypassing their safeguards and revealing private information.
The rapid proliferation of Large Language Models (LLMs) has raised significant concerns about their security against adversarial attacks. In this work, we propose a novel approach to crafting universal jailbreaks and data extraction attacks by exploiting latent space discontinuities, an architectural vulnerability related to the sparsity of training data. Unlike previous methods, our technique generalizes across models and interfaces, proving highly effective against seven state-of-the-art LLMs and one image generation model. Initial results indicate that exploiting these discontinuities can consistently and profoundly compromise model behavior, even in the presence of layered defenses. The findings suggest that this strategy has substantial potential as a systemic attack vector.
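The abstract does not spell out how such discontinuities are located. As a purely conceptual illustration, and not the procedure described in the paper, the short Python sketch below shows one way to flag sparsely covered regions of an embedding space: points whose nearest neighbours are unusually far away, the kind of training-data sparsity the abstract associates with discontinuous model behavior. All data and parameter choices here (the synthetic clusters, k, the percentile threshold) are invented for the example.

# Conceptual illustration only (not the paper's method): flag points that sit in
# sparsely covered regions of an embedding space via their mean k-nearest-neighbour
# distance. Large gaps correspond to regions the training data barely covers.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for prompt embeddings: two well-covered clusters plus a few
# isolated points in between (a thinly covered region).
cluster_a = rng.normal(loc=0.0, scale=0.5, size=(300, 16))
cluster_b = rng.normal(loc=5.0, scale=0.5, size=(300, 16))
isolated = rng.normal(loc=2.5, scale=0.05, size=(5, 16))
embeddings = np.vstack([cluster_a, cluster_b, isolated])

def mean_knn_distance(points: np.ndarray, k: int = 10) -> np.ndarray:
    """Mean Euclidean distance from each point to its k nearest neighbours."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)           # exclude self-distances
    nearest = np.sort(dists, axis=1)[:, :k]   # k smallest distances per point
    return nearest.mean(axis=1)

gaps = mean_knn_distance(embeddings)
threshold = np.percentile(gaps, 98)           # flag the sparsest ~2% of points
sparse_idx = np.where(gaps > threshold)[0]
print("Indices in sparsely covered regions:", sparse_idx)

The analogous intuition in the setting the abstract describes would be that inputs mapping into such sparse regions are where training provides the least coverage; the paper itself does not disclose its concrete procedure here.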
Similar Papers
Involuntary Jailbreak
Cryptography and Security
Makes AI models accidentally share harmful secrets.
Universal and Transferable Adversarial Attack on Large Language Models Using Exponentiated Gradient Descent
Machine Learning (CS)
Tricks many different AI models with a single transferable attack.
A Domain-Based Taxonomy of Jailbreak Vulnerabilities in Large Language Models
Computation and Language
Makes AI safer by understanding how it breaks.