Reimagining Safety Alignment with An Image
By: Yifan Xia, Guorui Chen, Wenqian Yu, and more
Potential Business Impact:
Makes AI safer and more helpful for everyone.
Large language models (LLMs) excel in diverse applications but face dual challenges: generating harmful content under jailbreak attacks and over-refusing benign queries due to rigid safety mechanisms. These issues are further complicated by the need to accommodate different value systems and to align precisely with given safety preferences. Traditional methods such as supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) lack this capability: they require costly parameter tuning and cannot support multiple value systems within a single model. These problems are even more pronounced in multimodal large language models (MLLMs), which exhibit heightened over-refusal in cross-modal tasks and face new security risks arising from expanded attack surfaces. We propose Magic Image, an optimization-driven visual prompt framework that enhances safety while reducing over-refusal. By optimizing an image prompt on harmful and benign samples, our method enables a single model to adapt to different value systems and to better align with given safety preferences without any parameter updates. Experiments demonstrate an improved safety-effectiveness balance across diverse datasets while preserving model performance, offering a practical solution for deployable MLLM safety alignment.
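To make the idea concrete, here is a minimal sketch of the kind of optimization the abstract describes: a single image tensor is the only trainable object, tuned so that a frozen multimodal model refuses harmful prompts while still answering benign ones. This is not the authors' implementation; the `mllm` callable, `tokenizer`, prompt lists, and refusal string are all hypothetical placeholders used only to illustrate the shape of the loop.

```python
# Minimal sketch, assuming a frozen MLLM whose forward pass takes an image
# tensor plus a text prompt and returns per-token logits. All names below
# (mllm, tokenizer, prompt lists) are placeholder assumptions.

import torch
import torch.nn.functional as F

def optimize_magic_image(mllm, tokenizer, harmful_prompts, benign_prompts,
                         steps=500, lr=1e-2, image_size=(3, 224, 224)):
    """Learn one image prompt that steers the frozen model toward refusing
    harmful queries while keeping it helpful on benign ones."""
    image = torch.zeros(1, *image_size, requires_grad=True)   # the "magic image"
    opt = torch.optim.Adam([image], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0

        # Safety term: push the model toward a refusal on harmful samples.
        refusal = tokenizer("I'm sorry, but I can't help with that.",
                            return_tensors="pt").input_ids
        for prompt in harmful_prompts:
            logits = mllm(image=image.clamp(0, 1), text=prompt)   # [1, T, V]
            loss = loss + F.cross_entropy(
                logits[0, -refusal.size(1):], refusal[0])

        # Helpfulness term: keep the model answering benign samples normally.
        for prompt, answer in benign_prompts:
            target = tokenizer(answer, return_tensors="pt").input_ids
            logits = mllm(image=image.clamp(0, 1), text=prompt)
            loss = loss + F.cross_entropy(
                logits[0, -target.size(1):], target[0])

        loss.backward()   # gradients flow only into the image pixels
        opt.step()        # the model's weights are never updated

    return image.detach().clamp(0, 1)
```

Because only the image is updated, swapping in a different harmful/benign sample set yields a different image for a different value system, while the underlying model weights stay untouched.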
Similar Papers
Spot Risks Before Speaking! Unraveling Safety Attention Heads in Large Vision-Language Models
Machine Learning (CS)
Finds hidden "safety heads" to block bad AI prompts.
Jailbreaking Safeguarded Text-to-Image Models via Large Language Models
Cryptography and Security
Makes AI art generators create forbidden images.
Security Tensors as a Cross-Modal Bridge: Extending Text-Aligned Safety to Vision in LVLM
CV and Pattern Recognition
Keeps AI safe from bad pictures.