Score: 0

Jailbreak Detection in Clinical Training LLMs Using Feature-Based Predictive Models

Published: April 21, 2025 | arXiv ID: 2505.00010v1

By: Tri Nguyen , Lohith Srikanth Pentapalli , Magnus Sieverding and more

Potential Business Impact:

Finds when AI is tricked into breaking rules.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

Jailbreaking in Large Language Models (LLMs) threatens their safe use in sensitive domains like education by allowing users to bypass ethical safeguards. This study focuses on detecting jailbreaks in 2-Sigma, a clinical education platform that simulates patient interactions using LLMs. We annotated over 2,300 prompts across 158 conversations using four linguistic variables shown to correlate strongly with jailbreak behavior. The extracted features were used to train several predictive models, including Decision Trees, Fuzzy Logic-based classifiers, Boosting methods, and Logistic Regression. Results show that feature-based predictive models consistently outperformed Prompt Engineering, with the Fuzzy Decision Tree achieving the best overall performance. Our findings demonstrate that linguistic-feature-based models are effective and explainable alternatives for jailbreak detection. We suggest future work explore hybrid frameworks that integrate prompt-based flexibility with rule-based robustness for real-time, spectrum-based jailbreak monitoring in educational LLMs.

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
12 pages

Category
Computer Science:
Computation and Language