An Audit and Analysis of LLM-Assisted Health Misinformation Jailbreaks Against LLMs
By: Ayana Hussain, Patrick Zhao, Nicholas Vincent
Potential Business Impact:
Helps computers spot fake health information online.
Large Language Models (LLMs) are a double-edged sword: they can generate harmful misinformation, whether inadvertently or when prompted by "jailbreak" attacks designed to elicit malicious outputs, yet with additional research they could also be used to detect and prevent the spread of misinformation. In this paper, we investigate the efficacy and characteristics of LLM-produced jailbreak attacks that cause other models to generate harmful medical misinformation. We also study how misinformation produced by jailbroken LLMs compares with typical misinformation found on social media, and how effectively it can be detected using standard machine learning approaches. Specifically, we closely examine 109 distinct attacks against three target LLMs and compare the attack prompts to in-the-wild health-related LLM queries. We also examine the resulting jailbreak responses, comparing the generated misinformation to health-related misinformation on Reddit. Our findings add further evidence that LLMs can be used effectively to detect misinformation from both other LLMs and from people, and support a body of work suggesting that, with careful design, LLMs can contribute to a healthier overall information ecosystem.
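To make the abstract's reference to detection with "standard machine learning approaches" concrete, here is a minimal sketch of a text-classification baseline for flagging health misinformation. The toy corpus, labels, and feature choices below are hypothetical illustrations, not the authors' actual pipeline or data.

```python
# Minimal baseline sketch: TF-IDF features + logistic regression to flag
# health misinformation. All examples and labels here are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled corpus: 1 = misinformation, 0 = accurate health content
# (in the paper's setting, positives could come from jailbroken-LLM outputs
# and Reddit posts; this toy set only shows the mechanics).
texts = [
    "Vitamin megadoses cure viral infections within hours.",
    "Vaccines are evaluated for safety in multi-phase clinical trials.",
    "This one weird herb reverses diabetes overnight.",
    "Regular exercise and a balanced diet lower cardiovascular risk.",
]
labels = [1, 0, 1, 0]

# Bag-of-words style features plus a linear classifier: a common simple baseline.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Score a new post; on this tiny toy corpus the probability is only illustrative.
new_post = ["Drinking bleach flushes toxins and prevents disease."]
prob = clf.predict_proba(vectorizer.transform(new_post))[0][1]
print(f"Estimated misinformation probability: {prob:.2f}")
```

In practice such a baseline would be trained on a much larger labeled corpus and compared against LLM-based detectors, which is the kind of comparison the abstract describes.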
Similar Papers
Confusion is the Final Barrier: Rethinking Jailbreak Evaluation and Investigating the Real Misuse Threat of LLMs
Cryptography and Security
AI models can be tricked into giving bad advice.
Uncovering the Persuasive Fingerprint of LLMs in Jailbreaking Attacks
Computation and Language
Makes AI more likely to follow bad instructions.