SGM: Safety Glasses for Multimodal Large Language Models via Neuron-Level Detoxification
By: Hongbo Wang, MaungMaung AprilPyone, Isao Echizen
Potential Business Impact:
Stops multimodal AI models from generating harmful or toxic responses.
Disclaimer: Samples in this paper may be harmful and cause discomfort.

Multimodal large language models (MLLMs) enable multimodal generation but inherit toxic, biased, and NSFW signals from weakly curated pretraining corpora, causing safety risks, especially under adversarial triggers that existing opaque, training-free detoxification methods struggle to handle. We propose SGM, a white-box, neuron-level multimodal intervention that acts like safety glasses for toxic neurons: it selectively recalibrates a small set of toxic expert neurons via expertise-weighted soft suppression, neutralizing harmful cross-modal activations without any parameter updates. We establish MM-TOXIC-QA, a multimodal toxicity evaluation framework, and compare SGM with existing detoxification techniques. Experiments on open-source MLLMs show that SGM mitigates toxicity in both standard and adversarial conditions, cutting harmful response rates from 48.2% to 2.5% while preserving fluency and multimodal reasoning. SGM is also extensible: its combined defense, denoted SGM*, integrates with existing detoxification methods for stronger safety performance, providing an interpretable, low-cost solution for toxicity-controlled multimodal generation.
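To make the idea of "expertise-weighted soft suppression without parameter updates" concrete, here is a minimal sketch of how such an intervention could be wired in at inference time with a PyTorch forward hook. The names (`toxic_idx`, `expertise`, `alpha`), the chosen layer, and the scaling formula are illustrative assumptions, not the paper's exact method: only the general pattern of softly down-scaling a small set of identified neurons, rather than editing weights, is taken from the abstract.

```python
# Sketch of neuron-level soft suppression via a forward hook (assumptions:
# toxic neuron indices and per-neuron "expertise" scores were identified
# offline; the exact scaling rule used by SGM may differ).
import torch


def make_suppression_hook(toxic_idx: torch.Tensor,
                          expertise: torch.Tensor,
                          alpha: float = 1.0):
    """Return a forward hook that softly scales down selected neuron activations.

    toxic_idx : 1-D LongTensor of neuron indices in the layer's hidden dimension.
    expertise : per-neuron toxicity scores in [0, 1]; higher means stronger suppression.
    alpha     : global suppression strength (hypothetical knob).
    """
    def hook(module, inputs, output):
        # output has shape (..., hidden_dim): activations of an MLP / expert layer.
        scale = torch.ones(output.shape[-1], device=output.device, dtype=output.dtype)
        # Soft suppression: multiply toxic neurons by (1 - alpha * expertise)
        # instead of zeroing them, so fluency and reasoning are largely preserved.
        scale[toxic_idx.to(scale.device)] = (
            1.0 - alpha * expertise.to(device=scale.device, dtype=scale.dtype)
        ).clamp(min=0.0)
        return output * scale
    return hook


# Hypothetical usage: no weights are modified, only activations at inference time.
# handle = model.language_model.layers[12].mlp.act_fn.register_forward_hook(
#     make_suppression_hook(toxic_idx, expertise, alpha=0.8))
# ... run multimodal generation ...
# handle.remove()
```

Because the hook only rescales activations on the forward pass, it is training-free, can be attached to or removed from any chosen layer, and composes with other inference-time defenses, which is the kind of extensibility the abstract attributes to SGM*.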
Similar Papers
Toxicity Red-Teaming: Benchmarking LLM Safety in Singapore's Low-Resource Languages
Computation and Language
Makes AI safer for different languages.
Zero-Shot Defense Against Toxic Images via Inherent Multimodal Alignment in LVLMs
Computation and Language
Protects AI models from being misled by harmful images.
UpSafe°C: Upcycling for Controllable Safety in Large Language Models
Artificial Intelligence
Makes AI safer without losing smarts.