Risk-adaptive Activation Steering for Safe Multimodal Large Language Models

Published: October 15, 2025 | arXiv ID: 2510.13698v1

By: Jonghyun Park, Minhyuk Seo, Jonghyun Choi

Potential Business Impact:

AI learns to spot harmful images and stay helpful.

Business Areas:
Image Recognition Data and Analytics, Software

One of the key challenges for modern AI models is ensuring that they provide helpful responses to benign queries while refusing malicious ones. However, models are often vulnerable to multimodal queries with harmful intent embedded in images. One approach to safety alignment is training with extensive safety datasets, at significant cost in both dataset curation and training. Inference-time alignment avoids these costs but introduces two drawbacks: excessive refusals of misclassified benign queries and slower inference due to iterative output adjustments. To overcome these limitations, we propose reformulating queries to strengthen cross-modal attention to safety-critical image regions, enabling accurate risk assessment at the query level. Using the assessed risk, our method adaptively steers activations to generate responses that are safe and helpful, without the overhead of iterative output adjustments. We call this Risk-adaptive Activation Steering (RAS). Extensive experiments across multiple multimodal safety and utility benchmarks demonstrate that RAS significantly reduces attack success rates, preserves general task performance, and improves inference speed over prior inference-time defenses.
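The core mechanism the abstract describes, scaling a steering intervention by an assessed risk score, can be illustrated with a forward hook on a transformer layer. The sketch below is a minimal illustration only, assuming a PyTorch model with HuggingFace-style decoder layers; `safety_direction`, `risk_score`, `alpha`, and the layer index are hypothetical stand-ins, not the paper's actual implementation.

```python
import torch

def make_steering_hook(safety_direction: torch.Tensor,
                       risk_score: float,
                       alpha: float = 1.0):
    """Return a forward hook that shifts hidden states along a
    precomputed 'safety' direction, scaled by the assessed risk.
    A benign query (risk_score near 0) is left nearly untouched;
    a risky one is steered more strongly toward refusal behavior."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * risk_score * safety_direction.to(hidden.dtype)
        return (steered, *output[1:]) if isinstance(output, tuple) else steered
    return hook

# Hypothetical usage with a HuggingFace-style decoder:
#   layer = model.model.layers[15]
#   handle = layer.register_forward_hook(
#       make_steering_hook(safety_direction, risk_score))
#   ... run generation ...
#   handle.remove()
```

Because the steering strength is a simple function of the per-query risk estimate, this kind of intervention adds essentially no cost at generation time, which is consistent with the speed advantage the abstract claims over iterative output-adjustment defenses.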

Country of Origin
🇰🇷 Korea, Republic of

Page Count
23 pages

Category
Computer Science:
Computer Vision and Pattern Recognition