Score: 1

Uncovering the Persuasive Fingerprint of LLMs in Jailbreaking Attacks

Published: October 24, 2025 | arXiv ID: 2510.21983v1

By: Havva Alizadeh Noughabi , Julien Serbanescu , Fattane Zarrinkalam and more

Potential Business Impact:

Makes AI more likely to follow bad instructions.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

Despite recent advances, Large Language Models remain vulnerable to jailbreak attacks that bypass alignment safeguards and elicit harmful outputs. While prior research has proposed various attack strategies differing in human readability and transferability, little attention has been paid to the linguistic and psychological mechanisms that may influence a model's susceptibility to such attacks. In this paper, we examine an interdisciplinary line of research that leverages foundational theories of persuasion from the social sciences to craft adversarial prompts capable of circumventing alignment constraints in LLMs. Drawing on well-established persuasive strategies, we hypothesize that LLMs, having been trained on large-scale human-generated text, may respond more compliantly to prompts with persuasive structures. Furthermore, we investigate whether LLMs themselves exhibit distinct persuasive fingerprints that emerge in their jailbreak responses. Empirical evaluations across multiple aligned LLMs reveal that persuasion-aware prompts significantly bypass safeguards, demonstrating their potential to induce jailbreak behaviors. This work underscores the importance of cross-disciplinary insight in addressing the evolving challenges of LLM safety. The code and data are available.

Country of Origin
🇨🇦 Canada

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
Computation and Language