Jailbreak Detection in Clinical Training LLMs Using Feature-Based Predictive Models
By: Tri Nguyen, Lohith Srikanth Pentapalli, Magnus Sieverding, and more
Potential Business Impact:
Detects when an AI is tricked into breaking its rules.
Jailbreaking in Large Language Models (LLMs) threatens their safe use in sensitive domains such as education by allowing users to bypass ethical safeguards. This study focuses on detecting jailbreaks in 2-Sigma, a clinical education platform that simulates patient interactions using LLMs. We annotated over 2,300 prompts across 158 conversations using four linguistic variables shown to correlate strongly with jailbreak behavior. The extracted features were used to train several predictive models, including Decision Trees, Fuzzy Logic-based classifiers, Boosting methods, and Logistic Regression. Results show that the feature-based predictive models consistently outperformed a prompt-engineering baseline, with the Fuzzy Decision Tree achieving the best overall performance. Our findings demonstrate that linguistic-feature-based models are effective and explainable alternatives for jailbreak detection. We suggest that future work explore hybrid frameworks that integrate prompt-based flexibility with rule-based robustness for real-time, spectrum-based jailbreak monitoring in educational LLMs.
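The pipeline described in the abstract (annotate prompts with a small set of linguistic features, then train standard classifiers to flag jailbreak attempts) can be illustrated with a minimal sketch. The sketch below uses scikit-learn with synthetic placeholder data; the four feature columns, the labels, and the model hyperparameters are illustrative assumptions, not the paper's actual variables or settings, and the fuzzy decision tree variant is omitted because it is not a standard scikit-learn model.

```python
# Illustrative sketch of feature-based jailbreak detection (not the paper's code).
# Assumes: one row per annotated prompt, four numeric linguistic features,
# and a binary label (1 = jailbreak attempt, 0 = benign prompt).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder feature matrix standing in for the annotated prompts
# (~2,300 in the study); the feature semantics here are hypothetical.
n_prompts = 2300
X = rng.random((n_prompts, 4))
# Placeholder labels derived from the synthetic features, for demonstration only.
y = (X[:, 0] + X[:, 1] > 1.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Three of the model families named in the abstract.
models = {
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```

In practice the placeholder matrix would be replaced by the four annotated linguistic variables per prompt, and model selection would compare these classifiers against the prompt-engineering baseline, as the study does.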
Similar Papers
Machine Learning for Detection and Analysis of Novel LLM Jailbreaks
Computation and Language
Stops AI from being tricked into saying bad things.
NLP Methods for Detecting Novel LLM Jailbreaks and Keyword Analysis with BERT
Computation and Language
Stops AI from being tricked into saying bad things.
LLM Jailbreak Detection for (Almost) Free!
Cryptography and Security
Stops AI from making bad stuff without slowing it down.