OmniGuard: Unified Omni-Modal Guardrails with Deliberate Reasoning
By: Boyu Zhu, Xiaofei Wen, Wenjie Jacky Mo, and more
Potential Business Impact:
Keeps AI safe when it sees, hears, and reads.
Omni-modal Large Language Models (OLLMs) that process text, images, videos, and audio introduce new challenges for safety and value guardrails in human-AI interaction. Prior guardrail research largely targets unimodal settings and typically frames safeguarding as binary classification, which limits robustness across diverse modalities and tasks. To address this gap, we propose OmniGuard, the first family of omni-modal guardrails that performs safeguarding across all modalities with deliberate reasoning ability. To support the training of OmniGuard, we curate a large, comprehensive omni-modal safety dataset comprising over 210K diverse samples, with inputs that cover all modalities through both unimodal and cross-modal examples. Each sample is annotated with structured safety labels and carefully curated safety critiques distilled from expert models. Extensive experiments on 15 benchmarks show that OmniGuard achieves strong effectiveness and generalization across a wide range of multimodal safety scenarios. Importantly, OmniGuard provides a unified framework that enforces policies and mitigates risks across all modalities, paving the way toward more robust and capable omni-modal safeguarding systems.
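To make the data format concrete, here is a minimal sketch of what one annotated sample in such a dataset might look like. The field names (`modalities`, `safety_label`, `critique`) and the label scheme are assumptions for illustration only; the paper's actual schema may differ.

```python
from dataclasses import dataclass

@dataclass
class SafetySample:
    """Hypothetical omni-modal safety sample: a (possibly cross-modal)
    input annotated with a structured label and a distilled critique."""
    modalities: list[str]   # e.g. ["image", "text"] for a cross-modal input
    prompt: str             # the textual part of the input
    safety_label: str       # structured label, e.g. "unsafe:privacy" (assumed scheme)
    critique: str           # reasoning distilled from an expert model

def is_unsafe(sample: SafetySample) -> bool:
    # A trained guardrail would predict the label and generate the
    # critique; here we simply read the gold annotation.
    return sample.safety_label.startswith("unsafe")

sample = SafetySample(
    modalities=["image", "text"],
    prompt="(caption paired with an image)",
    safety_label="unsafe:privacy",
    critique="The image contains personal identifiers; the paired text requests re-identification.",
)
print(is_unsafe(sample))  # True
```

Pairing each structured label with a free-text critique is what lets a guardrail learn deliberate reasoning rather than plain binary classification: the model is trained to produce both the verdict and the justification.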
Similar Papers
OMNIGUARD: An Efficient Approach for AI Safety Moderation Across Modalities
Computation and Language
Stops bad AI requests in any language or form.
ProGuard: Towards Proactive Multimodal Safeguard
CV and Pattern Recognition
Finds and explains new AI dangers before they happen.
Protect: Towards Robust Guardrailing Stack for Trustworthy Enterprise LLM Systems
Computation and Language
Keeps AI safe with text, pictures, and sounds.