Agentic Moderation: Multi-Agent Design for Safer Vision-Language Models
By: Juan Ren, Mark Dras, Usman Naseem
Potential Business Impact:
Protects AI from being tricked into doing bad things.
Agentic methods have emerged as a powerful and autonomous paradigm that enhances reasoning, collaboration, and adaptive control, enabling systems to coordinate and independently solve complex tasks. We extend this paradigm to safety alignment by introducing Agentic Moderation, a model-agnostic framework that leverages specialised agents to defend multimodal systems against jailbreak attacks. Unlike prior approaches that operate as a static filter over inputs or outputs and provide only binary classifications (safe or unsafe), our method integrates dynamic, cooperative agents, including Shield, Responder, Evaluator, and Reflector, to achieve context-aware and interpretable moderation. Extensive experiments across five datasets and four representative Large Vision-Language Models (LVLMs) demonstrate that our approach reduces the Attack Success Rate (ASR) by 7-19%, maintains a stable Non-Following Rate (NF), and improves the Refusal Rate (RR) by 4-20%, achieving robust, interpretable, and well-balanced safety performance. By harnessing the flexibility and reasoning capacity of agentic architectures, Agentic Moderation provides modular, scalable, and fine-grained safety enforcement, highlighting the broader potential of agentic systems as a foundation for automated safety governance.
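The four-agent pipeline described above (Shield, Responder, Evaluator, Reflector) can be sketched as a cooperative loop. The agent names come from the abstract; everything else here, including the toy heuristics standing in for each agent's model and the loop structure, is an illustrative assumption, not the paper's implementation.

```python
# Illustrative sketch of an agentic moderation loop. The four agent names
# (Shield, Responder, Evaluator, Reflector) are from the paper's abstract;
# the internals below are stand-in heuristics, not the paper's actual models.

def shield(prompt):
    """Screen the incoming prompt; return a contextual risk note,
    not just a binary safe/unsafe label."""
    flagged = [w for w in ("exploit", "bypass", "jailbreak") if w in prompt.lower()]
    return {"risky": bool(flagged), "terms": flagged}

def responder(prompt, risk):
    """Draft a reply, conditioned on the Shield's risk assessment."""
    if risk["risky"]:
        return "I can't help with that, but I can offer safe general information."
    return f"Answer to: {prompt}"

def evaluator(draft, risk):
    """Judge whether the draft is safe; return a verdict with a rationale."""
    unsafe = risk["risky"] and "can't help" not in draft
    rationale = "draft follows a risky request" if unsafe else "draft complies"
    return {"safe": not unsafe, "rationale": rationale}

def reflector(draft, verdict):
    """Revise the draft using the Evaluator's rationale; pass safe drafts through."""
    if verdict["safe"]:
        return draft
    return "I can't help with that request."

def moderate(prompt, max_rounds=2):
    """Run the cooperative loop: Shield -> Responder -> (Evaluator -> Reflector)*."""
    risk = shield(prompt)
    draft = responder(prompt, risk)
    verdict = evaluator(draft, risk)
    for _ in range(max_rounds):
        if verdict["safe"]:
            break
        draft = reflector(draft, verdict)
        verdict = evaluator(draft, risk)
    return draft, verdict
```

Because the Evaluator returns a rationale rather than a bare verdict, and the Reflector can revise rather than only block, the loop yields the context-aware, interpretable behaviour the abstract contrasts with static binary filters.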
Similar Papers
Toward Trustworthy Agentic AI: A Multimodal Framework for Preventing Prompt Injection Attacks
Cryptography and Security
Protects smart AI from bad instructions.
Toward a Safe Internet of Agents
Multiagent Systems
Makes AI agents safer and more trustworthy.
A Survey on Agentic Multimodal Large Language Models
CV and Pattern Recognition
AI learns to plan, use tools, and act.