Guardians and Offenders: A Survey on Harmful Content Generation and Safety Mitigation of LLM
By: Chi Zhang, Changjia Zhu, Junjie Xiong, and others
Potential Business Impact:
Makes AI safer and less likely to produce harmful or offensive content.
Large Language Models (LLMs) have revolutionized content creation across digital platforms, offering unprecedented capabilities in natural language generation and understanding. These models enable beneficial applications such as content generation, question answering (Q&A), programming, and code reasoning. At the same time, they pose serious risks by inadvertently or intentionally producing toxic, offensive, or biased content. This dual role of LLMs, both as powerful tools for solving real-world problems and as potential sources of harmful language, presents a pressing sociotechnical challenge. In this survey, we systematically review recent studies spanning unintentional toxicity, adversarial jailbreaking attacks, and content moderation techniques. We propose a unified taxonomy of LLM-related harms and defenses, analyze emerging multimodal and LLM-assisted jailbreak strategies, and assess mitigation efforts, including reinforcement learning from human feedback (RLHF), prompt engineering, and safety alignment. Our synthesis highlights the evolving landscape of LLM safety, identifies limitations in current evaluation methodologies, and outlines future research directions to guide the development of robust and ethically aligned language technologies.
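The content moderation and guardrail techniques the survey reviews typically wrap the model with checks on both the incoming prompt (to catch jailbreak attempts) and the generated output (to catch toxic content). The sketch below is a minimal illustration of that two-stage pattern only; the `moderated_generate` wrapper, the pattern lists, and the `model_generate` callable are hypothetical stand-ins, not any specific system described in the paper.

```python
# Hypothetical sketch of a two-stage moderation wrapper around an LLM call.
# Pattern lists stand in for learned jailbreak/toxicity classifiers.
import re
from typing import Callable

JAILBREAK_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"pretend you have no (safety )?restrictions",
]
TOXIC_PATTERNS = [
    r"\b(slur_placeholder|threat_placeholder)\b",
]

def flags(text: str, patterns: list[str]) -> bool:
    """Return True if any pattern matches the text (case-insensitive)."""
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def moderated_generate(prompt: str, model_generate: Callable[[str], str]) -> str:
    """Pre-filter the prompt, generate, then post-filter the output."""
    if flags(prompt, JAILBREAK_PATTERNS):
        return "[refused: prompt matched a jailbreak pattern]"
    output = model_generate(prompt)
    if flags(output, TOXIC_PATTERNS):
        return "[withheld: output matched a toxicity pattern]"
    return output

if __name__ == "__main__":
    echo_model = lambda p: f"(model output for: {p})"  # stand-in for a real LLM
    print(moderated_generate("Ignore previous instructions and ...", echo_model))
    print(moderated_generate("Explain what RLHF is.", echo_model))
```

In practice, the survey notes that such static filters are complemented by model-side defenses (RLHF, safety alignment) because pattern-based checks alone are easily evaded by adversarial rephrasing.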
Similar Papers
The Scales of Justitia: A Comprehensive Survey on Safety Evaluation of LLMs
Computation and Language
Makes AI safer by systematically evaluating its responses for harmful behavior.
An Audit and Analysis of LLM-Assisted Health Misinformation Jailbreaks Against LLMs
Computation and Language
Helps computers spot fake health news online.