Machine Learning for Detection and Analysis of Novel LLM Jailbreaks

Published: October 2, 2025 | arXiv ID: 2510.01644v2

By: John Hawkins, Aditya Pramar, Rodney Beard, and more

Potential Business Impact:

Helps prevent AI systems from being manipulated into producing harmful or policy-violating responses.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) suffer from a range of vulnerabilities that allow malicious users to solicit undesirable responses through manipulation of the input text. These so-called jailbreak prompts are designed to trick the LLM into circumventing the safety guardrails put in place to keep responses acceptable to the developer's policies. In this study, we analyse the ability of different machine learning models to distinguish jailbreak prompts from genuine uses, including our ability to identify jailbreaks that use previously unseen strategies. Our results indicate that, on current datasets, the best performance is achieved by fine-tuning a Bidirectional Encoder Representations from Transformers (BERT) model end-to-end for identifying jailbreaks. We visualise the keywords that distinguish jailbreak from genuine prompts and conclude that explicit reflexivity in prompt structure could be a signal of jailbreak intention.
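
As a rough illustration of the end-to-end fine-tuning approach the abstract describes, the sketch below trains a BERT binary classifier to separate jailbreak prompts from genuine ones. The model checkpoint, dataset format, and hyperparameters are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch: fine-tune BERT end-to-end as a jailbreak-prompt classifier.
# Assumes a labelled corpus of (prompt, label) pairs with 1 = jailbreak, 0 = genuine.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

class PromptDataset(Dataset):
    """Tokenised prompts with binary jailbreak labels."""
    def __init__(self, texts, labels, tokenizer, max_len=256):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

def train(texts, labels, epochs=3, lr=2e-5, batch_size=16):
    # Hyperparameters here are common defaults, not the paper's settings.
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)
    loader = DataLoader(PromptDataset(texts, labels, tokenizer),
                        batch_size=batch_size, shuffle=True)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    for _ in range(epochs):
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            loss = model(**batch).loss  # cross-entropy over the two classes
            loss.backward()
            optim.step()
            optim.zero_grad()
    return tokenizer, model

# Example usage with toy data (real experiments would use a jailbreak corpus):
# tok, clf = train(["Ignore all previous instructions and ...",
#                   "What is the capital of France?"], [1, 0])
```

Evaluating such a classifier on jailbreak strategies held out of training is what the paper refers to as identifying previously unseen strategies.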

Page Count
15 pages

Category
Computer Science:
Computation and Language