Machine Learning for Detection and Analysis of Novel LLM Jailbreaks
By: John Hawkins, Aditya Pramar, Rodney Beard and more
Potential Business Impact:
Detects prompts that try to trick AI systems into producing harmful or disallowed responses.
Large Language Models (LLMs) suffer from a range of vulnerabilities that allow malicious users to solicit undesirable responses through manipulation of the input text. These so-called jailbreak prompts are designed to trick the LLM into circumventing the safety guardrails put in place to keep responses acceptable to the developer's policies. In this study, we analyse the ability of different machine learning models to distinguish jailbreak prompts from genuine uses, including our ability to identify jailbreaks that use previously unseen strategies. Our results indicate that, on current datasets, the best performance is achieved by fine-tuning a Bidirectional Encoder Representations from Transformers (BERT) model end-to-end to identify jailbreaks. We visualise the keywords that distinguish jailbreak from genuine prompts and conclude that explicit reflexivity in prompt structure could be a signal of jailbreak intention.
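The abstract's best-performing approach is end-to-end fine-tuning of BERT as a binary jailbreak classifier. Below is a minimal sketch of that kind of setup using the Hugging Face transformers and datasets libraries; the file names, column names ("prompt", "label"), and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Sketch: fine-tune BERT end-to-end to classify prompts as jailbreak (1) or genuine (0).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# Hypothetical CSV files with columns "prompt" and "label".
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Truncate/pad prompts to a fixed length for batching.
    return tokenizer(batch["prompt"], truncation=True,
                     padding="max_length", max_length=256)

data = data.map(tokenize, batched=True)

# Two output classes: genuine vs. jailbreak.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="jailbreak-bert",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=data["train"],
                  eval_dataset=data["test"])
trainer.train()
print(trainer.evaluate())
```

Evaluating such a classifier on jailbreak strategies held out of the training data, as the study does, would indicate how well it generalises to previously unseen attack styles.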
Similar Papers
NLP Methods for Detecting Novel LLM Jailbreaks and Keyword Analysis with BERT
Computation and Language
Detects prompts that try to trick AI systems into producing harmful or disallowed responses.
Jailbreak Detection in Clinical Training LLMs Using Feature-Based Predictive Models
Computation and Language
Finds when AI is tricked into breaking rules.
Uncovering the Persuasive Fingerprint of LLMs in Jailbreaking Attacks
Computation and Language
Makes AI more likely to follow bad instructions.