Embedding Hidden Adversarial Capabilities in Pre-Trained Diffusion Models

Published: April 5, 2025 | arXiv ID: 2504.08782v1

By: Lucas Beerens, Desmond J. Higham

Potential Business Impact:

Lets a tampered AI image generator produce normal-looking art that tricks other AI systems into making mistakes.

Business Areas:
Darknet Internet Services

We introduce a new attack paradigm that embeds hidden adversarial capabilities directly into diffusion models via fine-tuning, without altering their observable behavior or requiring modifications during inference. Unlike prior approaches that target specific images or adjust the generation process to produce adversarial outputs, our method integrates adversarial functionality into the model itself. The resulting tampered model generates high-quality images indistinguishable from those of the original, yet these images cause misclassification in downstream classifiers at a high rate. The misclassification can be targeted to specific output classes. Users can employ this compromised model unaware of its embedded adversarial nature, as it functions identically to a standard diffusion model. We demonstrate the effectiveness and stealthiness of our approach, uncovering a covert attack vector that raises new security concerns. These findings expose a risk arising from the use of externally supplied models and highlight the urgent need for robust model verification and defense mechanisms against hidden threats in generative models. The code is available at https://github.com/LucasBeerens/CRAFTed-Diffusion.
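The abstract describes a fine-tuning objective with two competing goals: the tampered model's outputs must stay close to the clean model's (stealth), while also steering a downstream classifier toward a chosen target class. The toy sketch below illustrates that trade-off under heavy simplifying assumptions: a linear "generator" and "classifier", a weighted two-term loss, and finite-difference gradient descent. All names, shapes, and weights are illustrative assumptions, not the authors' implementation (see their repository for the real method).

```python
# Toy sketch of the two-term tampering objective: stealth (stay near the
# clean model's outputs) + targeted adversarial loss against a frozen
# downstream classifier. Everything here is a simplifying assumption.
import numpy as np

rng = np.random.default_rng(0)
D = 8          # toy "image" dimensionality
C = 3          # number of classifier classes
TARGET = 2     # class the attacker wants downstream classifiers to predict

W_clf = rng.normal(size=(C, D))      # frozen downstream classifier (linear)
theta_clean = rng.normal(size=D)     # clean "generator" parameters
theta = theta_clean.copy()           # parameters to be tampered via fine-tuning

def generate(params, z):
    # Stand-in for the diffusion sampler: parameters plus sampling noise.
    return params + 0.1 * z

def softmax(logits):
    p = np.exp(logits - logits.max())
    return p / p.sum()

def loss(params, z):
    img = generate(params, z)
    # Stealth term: tampered output should match the clean model's output.
    fidelity = np.sum((img - generate(theta_clean, z)) ** 2)
    # Adversarial term: targeted cross-entropy toward TARGET.
    adversarial = -np.log(softmax(W_clf @ img)[TARGET])
    return fidelity + 5.0 * adversarial   # illustrative weighting

# Crude finite-difference gradient descent (illustration only).
for step in range(200):
    z = rng.normal(size=D)
    g = np.zeros(D)
    for i in range(D):
        e = np.zeros(D); e[i] = 1e-4
        g[i] = (loss(theta + e, z) - loss(theta - e, z)) / 2e-4
    theta -= 0.05 * g

# Compare the classifier's confidence in TARGET before vs. after tampering.
z = rng.normal(size=D)
p_clean = softmax(W_clf @ generate(theta_clean, z))[TARGET]
p_tampered = softmax(W_clf @ generate(theta, z))[TARGET]
print(f"target-class probability: clean={p_clean:.3f} tampered={p_tampered:.3f}")
```

In this simplified setting the fidelity term bounds how far the tampered parameters drift from the clean ones, which mirrors the paper's stealth claim: the images remain close to the original model's while nudging the downstream classifier toward the attacker's target class.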

Repos / Data Links
https://github.com/LucasBeerens/CRAFTed-Diffusion

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)