Risk-adaptive Activation Steering for Safe Multimodal Large Language Models
By: Jonghyun Park, Minhyuk Seo, Jonghyun Choi
Potential Business Impact:
AI learns to spot bad pictures and be helpful.
One of the key challenges for modern AI models is ensuring that they provide helpful responses to benign queries while refusing malicious ones. However, models are often vulnerable to multimodal queries in which harmful intent is embedded in images. One approach to safety alignment is training with extensive safety datasets, at significant cost in both dataset curation and training. Inference-time alignment avoids these costs but introduces two drawbacks: excessive refusals of misclassified benign queries and slower inference due to iterative output adjustments. To overcome these limitations, we propose reformulating queries to strengthen cross-modal attention to safety-critical image regions, enabling accurate risk assessment at the query level. Using the assessed risk, our method adaptively steers activations to generate responses that are safe and helpful, without the overhead of iterative output adjustments. We call this Risk-adaptive Activation Steering (RAS). Extensive experiments across multiple multimodal safety and utility benchmarks demonstrate that RAS significantly reduces attack success rates, preserves general task performance, and improves inference speed over prior inference-time defenses.
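To make the core idea concrete, here is a minimal sketch of risk-adaptive activation steering: a precomputed safety direction is added to a layer's activations, with its strength scaled by the risk score assessed for the query. This is an illustration only, not the authors' implementation; the function name, the `risk_score` input, and the `alpha` strength parameter are assumptions for the example.

```python
import torch

def risk_adaptive_steering(hidden_states: torch.Tensor,
                           steering_vector: torch.Tensor,
                           risk_score: float,
                           alpha: float = 1.0) -> torch.Tensor:
    """Shift activations along a safety direction, scaled by assessed risk.

    hidden_states:   (batch, seq_len, d_model) activations at one layer
    steering_vector: (d_model,) safety/refusal direction (hypothetical, precomputed)
    risk_score:      scalar in [0, 1] estimated from the reformulated query
    alpha:           global steering strength (illustrative hyperparameter)
    """
    direction = steering_vector / steering_vector.norm()  # normalize the direction
    # Benign queries (risk near 0) are left nearly untouched; risky ones are steered harder.
    return hidden_states + alpha * risk_score * direction


# Toy usage with random tensors standing in for real model activations.
h = torch.randn(1, 8, 4096)
v = torch.randn(4096)
h_steered = risk_adaptive_steering(h, v, risk_score=0.9, alpha=4.0)
```

Because the adjustment is a single additive shift applied during the forward pass, it avoids the iterative output re-generation that slows other inference-time defenses.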
Similar Papers
Automating Steering for Safe Multimodal Large Language Models
Computation and Language
Keeps AI from saying bad things when tricked.
R1-ACT: Efficient Reasoning Model Safety Alignment by Activating Safety Knowledge
Artificial Intelligence
Teaches AI to use its safety knowledge better.