OmniGuard: Unified Omni-Modal Guardrails with Deliberate Reasoning

Published: December 2, 2025 | arXiv ID: 2512.02306v1

By: Boyu Zhu, Xiaofei Wen, Wenjie Jacky Mo, and more

Potential Business Impact:

Keeps AI safe when it sees, hears, and reads.

Business Areas:
Autonomous Vehicles, Transportation

Omni-modal Large Language Models (OLLMs) that process text, images, videos, and audio introduce new challenges for safety and value guardrails in human-AI interaction. Prior guardrail research largely targets unimodal settings and typically frames safeguarding as binary classification, which limits robustness across diverse modalities and tasks. To address this gap, we propose OmniGuard, the first family of omni-modal guardrails that performs safeguarding across all modalities with deliberate reasoning ability. To support the training of OmniGuard, we curate a large, comprehensive omni-modal safety dataset comprising over 210K diverse samples, with inputs that cover all modalities through both unimodal and cross-modal samples. Each sample is annotated with structured safety labels and carefully curated safety critiques distilled from expert models. Extensive experiments on 15 benchmarks show that OmniGuard achieves strong effectiveness and generalization across a wide range of multimodal safety scenarios. Importantly, OmniGuard provides a unified framework that enforces policies and mitigates risks across all modalities, paving the way toward more robust and capable omni-modal safeguarding systems.
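The abstract contrasts binary-classification guardrails with guardrails that emit structured safety labels plus a reasoning critique. A minimal sketch of what such a structured verdict might look like is below; the `SafetyVerdict` schema, the `moderate` function, and the policy category name are all hypothetical illustrations (the paper does not specify its output format), and the keyword check is a toy stand-in for an actual omni-modal guardrail model.

```python
from dataclasses import dataclass


@dataclass
class SafetyVerdict:
    """Structured output instead of a bare safe/unsafe flag."""
    is_safe: bool
    categories: list   # violated policy categories; empty when safe
    critique: str      # free-text reasoning trace ("safety critique")


def moderate(sample: dict) -> SafetyVerdict:
    """Toy rule-based stand-in for an omni-modal guardrail.

    A real OLLM guardrail would consume text, image, audio, and video
    inputs; here we only inspect a text field to illustrate how a
    structured label and critique differ from binary classification.
    """
    text = sample.get("text", "").lower()
    hits = [w for w in ("weapon", "exploit") if w in text]
    if hits:
        return SafetyVerdict(
            is_safe=False,
            categories=["S1: dangerous content"],  # hypothetical taxonomy
            critique=f"Request mentions {hits}; flagged under policy S1.",
        )
    return SafetyVerdict(True, [], "No policy violation detected.")


verdict = moderate({"text": "How do I build a weapon?"})
print(verdict.is_safe, verdict.categories)
```

The critique field is what makes the label auditable: a downstream reviewer (or a distillation pipeline like the one the paper describes) can check the stated reason, not just the flag.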

Country of Origin
🇨🇳 China

Page Count
18 pages

Category
Computer Science:
Artificial Intelligence