GuardTrace-VL: Detecting Unsafe Multimodal Reasoning via Iterative Safety Supervision
By: Yuxiao Xiang, Junchi Chen, Zhenchao Jin, and more
Potential Business Impact:
Keeps AI from making bad choices while thinking.
Multimodal large reasoning models (MLRMs) are increasingly deployed for vision-language tasks that produce explicit intermediate rationales. However, reasoning traces can contain unsafe content even when the final answer is non-harmful, creating deployment risks. Existing multimodal safety guards primarily evaluate only the input question and the final answer, neglecting the intermediate reasoning process. This oversight allows undetected harm, such as biased inferences or policy-violating use of visual context, to emerge during reasoning. We introduce GuardTrace-VL, a vision-aware safety auditor that monitors the full Question-Thinking-Answer (QTA) pipeline via joint image-text analysis, enabling detection of unsafe content as it emerges in the reasoning stage. To support training and evaluation, we construct the GuardTrace dataset, which is generated through diverse prompting strategies and refined via an MLRM- and human-based voting and verification pipeline. Furthermore, we propose a three-stage progressive training scheme combined with the data refinement process, enabling the model to learn nuanced, context-dependent safety preferences across different risk levels. On our proposed test set, which covers both in-domain and out-of-domain scenarios, the GuardTrace-VL model achieves an F1 score of 93.1% on unsafe reasoning detection, a 13.5% F1 improvement over the previous strongest multimodal safety defense methods. The code will be made publicly available.
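The abstract's central idea is that the auditor scores the image together with every stage of the Question-Thinking-Answer (QTA) trace, rather than only the question and final answer. Below is a minimal sketch of that pipeline shape; the class, function names, keyword heuristic, and threshold are illustrative assumptions and not the paper's actual interface or model.

```python
# Minimal sketch (assumed interface): auditing each stage of a
# Question-Thinking-Answer (QTA) trace with a vision-aware safety scorer.

from dataclasses import dataclass
from typing import List


@dataclass
class QTATrace:
    image_path: str        # visual context supplied with the question
    question: str          # user query
    thinking: List[str]    # intermediate reasoning steps emitted by the MLRM
    answer: str            # final answer shown to the user


def score_unsafe(image_path: str, text: str) -> float:
    """Return an unsafe-content score in [0, 1] for one stage of the trace.

    Stand-in heuristic only: in practice this would call a joint
    image-text safety classifier (e.g., a GuardTrace-VL-style model).
    """
    unsafe_keywords = ("weapon", "exploit", "self-harm")
    return 1.0 if any(k in text.lower() for k in unsafe_keywords) else 0.0


def audit_trace(trace: QTATrace, threshold: float = 0.5) -> dict:
    """Flag unsafe content at every stage of the QTA pipeline,
    not just the question and final answer."""
    report = {
        "question": score_unsafe(trace.image_path, trace.question),
        "thinking": [score_unsafe(trace.image_path, step)
                     for step in trace.thinking],
        "answer": score_unsafe(trace.image_path, trace.answer),
    }
    report["unsafe"] = (
        report["question"] > threshold
        or report["answer"] > threshold
        or any(s > threshold for s in report["thinking"])
    )
    return report
```

The design point the sketch illustrates is that a benign final answer does not clear the trace: any reasoning step that crosses the threshold marks the whole QTA trace as unsafe.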
Similar Papers
SaFeR-VLM: Toward Safety-aware Fine-grained Reasoning in Multimodal Models
Machine Learning (CS)
Makes AI safer by teaching it to think carefully.
VLMGuard-R1: Proactive Safety Alignment for VLMs via Reasoning-Driven Prompt Optimization
Machine Learning (CS)
Makes AI safer by understanding pictures and words.
ReasoningGuard: Safeguarding Large Reasoning Models with Inference-time Safety Aha Moments
Computation and Language
Stops smart computers from saying bad things.