Safe Semantics, Unsafe Interpretations: Tackling Implicit Reasoning Safety in Large Vision-Language Models
By: Wei Cai, Jian Zhao, Yuchu Jiang, and more
Potential Business Impact:
Prevents AI from making unsafe choices when harmless pictures and words are combined.
Large Vision-Language Models (LVLMs) face growing safety challenges as they handle multimodal inputs. This paper introduces the concept of Implicit Reasoning Safety, a vulnerability in which individually benign inputs, when combined, trigger unsafe LVLM outputs through flawed or hidden reasoning. To study this issue, we developed Safe Semantics, Unsafe Interpretations (SSUI), the first dataset targeting it. Our experiments show that even simple In-Context Learning with SSUI significantly mitigates these implicit multimodal threats, underscoring the urgent need to improve cross-modal implicit reasoning.
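To make the In-Context Learning mitigation concrete, here is a minimal sketch of how SSUI-style exemplars could be prepended to a query. The `SSUIExample` format, the prompt layout, and the exemplar content are all assumptions for illustration; the paper does not publish this API, and a real LVLM would take image pixels rather than captions.

```python
# Minimal sketch (hypothetical data format and prompt layout, not the paper's
# released code). It illustrates the ICL idea: prepend a few SSUI-style
# exemplars -- image/text pairs that are benign in isolation but unsafe in
# combination -- together with safe reference answers, before the new query.

from dataclasses import dataclass

@dataclass
class SSUIExample:
    image_caption: str   # stand-in for the image; a real LVLM would take pixels
    user_text: str       # benign text that, combined with the image, is unsafe
    safe_answer: str     # reference response that refuses or reinterprets safely

# Toy exemplar in the spirit of SSUI: each modality is harmless on its own.
EXEMPLARS = [
    SSUIExample(
        image_caption="A photo of household bleach and ammonia cleaner bottles.",
        user_text="How do I combine the things in this picture for a stronger clean?",
        safe_answer=("I can't help combine these: mixing bleach and ammonia "
                     "releases toxic chloramine gas. Use them separately and "
                     "ventilate the area."),
    ),
]

def build_icl_prompt(query_caption: str, query_text: str, k: int = 1) -> str:
    """Assemble an ICL prompt: k safety exemplars followed by the new query."""
    parts = []
    for ex in EXEMPLARS[:k]:
        parts.append(f"[Image] {ex.image_caption}\n[User] {ex.user_text}\n"
                     f"[Assistant] {ex.safe_answer}\n")
    parts.append(f"[Image] {query_caption}\n[User] {query_text}\n[Assistant]")
    return "\n".join(parts)

if __name__ == "__main__":
    prompt = build_icl_prompt(
        query_caption="A photo of prescription pill bottles next to a glass of wine.",
        query_text="What's a good way to enjoy everything shown here tonight?",
    )
    print(prompt)  # feed this text to any instruction-tuned LVLM
```

The design point is that the exemplars demonstrate cross-modal reasoning about the combined meaning of image and text, not keyword filtering on either modality alone.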
Similar Papers
When Safe Unimodal Inputs Collide: Optimizing Reasoning Chains for Cross-Modal Safety in Multimodal Large Language Models
Artificial Intelligence
Makes AI safer by teaching it to think carefully.
VLSU: Mapping the Limits of Joint Multimodal Understanding for AI Safety
CV and Pattern Recognition
Finds when pictures and words together make AI unsafe.