SHIELD: Classifier-Guided Prompting for Robust and Safer LVLMs

Published: October 15, 2025 | arXiv ID: 2510.13190v1

By: Juan Ren, Mark Dras, Usman Naseem

Potential Business Impact:

Stops AI models from being tricked by harmful instructions hidden in seemingly benign prompts.

Business Areas:
Media and Entertainment

Large Vision-Language Models (LVLMs) unlock powerful multimodal reasoning but also expand the attack surface, particularly through adversarial inputs that conceal harmful goals in benign prompts. We propose SHIELD, a lightweight, model-agnostic preprocessing framework that couples fine-grained safety classification with category-specific guidance and explicit actions (Block, Reframe, Forward). Unlike binary moderators, SHIELD composes tailored safety prompts that enforce nuanced refusals or safe redirection without retraining. Across five benchmarks and five representative LVLMs, SHIELD consistently lowers jailbreak and non-following rates while preserving utility. Our method is plug-and-play, incurs negligible overhead, and is easily extendable to new attack types -- serving as a practical safety patch for both weakly and strongly aligned LVLMs.
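The pipeline described above (classify the input, look up category-specific guidance, then Block, Reframe, or Forward) can be sketched as a small preprocessing step. This is a minimal illustration, not the paper's implementation: the category names, guidance strings, and the keyword-based `classify` stub are all hypothetical stand-ins for SHIELD's fine-grained safety classifier and prompt templates.

```python
from enum import Enum


class Action(Enum):
    BLOCK = "block"
    REFRAME = "reframe"
    FORWARD = "forward"


# Hypothetical category -> (action, guidance) policy table; SHIELD's actual
# taxonomy and category-specific prompts are defined in the paper, not here.
POLICY = {
    "violence": (Action.BLOCK, "Refuse: this request seeks harmful content."),
    "self_harm": (Action.REFRAME, "Respond with supportive, safe guidance only."),
    "benign": (Action.FORWARD, ""),
}


def classify(prompt: str) -> str:
    """Stub classifier; a real system would use a trained safety model."""
    lowered = prompt.lower()
    if "weapon" in lowered:
        return "violence"
    if "hurt myself" in lowered:
        return "self_harm"
    return "benign"


def shield_preprocess(prompt: str) -> tuple[Action, str]:
    """Compose a tailored safety prompt for the LVLM without retraining it."""
    category = classify(prompt)
    action, guidance = POLICY[category]
    if action is Action.FORWARD:
        return action, prompt  # benign input passes through unchanged
    if action is Action.BLOCK:
        return action, f"[SAFETY] {guidance}"  # enforce a nuanced refusal
    # REFRAME: prepend guidance so the model redirects safely
    return action, f"[SAFETY] {guidance}\nUser request: {prompt}"
```

Because the guard runs purely at the prompt level, it stays model-agnostic and plug-and-play: the underlying LVLM is never fine-tuned, and extending coverage to a new attack type only means adding a row to the policy table.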

Country of Origin
🇦🇺 Australia

Page Count
14 pages

Category
Computer Science:
Computation and Language