Embedding Hidden Adversarial Capabilities in Pre-Trained Diffusion Models
By: Lucas Beerens, Desmond J. Higham
Potential Business Impact:
Makes AI-generated art trick other AI systems into making mistakes.
We introduce a new attack paradigm that embeds hidden adversarial capabilities directly into diffusion models via fine-tuning, without altering their observable behavior or requiring modifications during inference. Unlike prior approaches that target specific images or adjust the generation process to produce adversarial outputs, our method integrates adversarial functionality into the model itself. The resulting tampered model generates high-quality images indistinguishable from those of the original, yet these images cause misclassification in downstream classifiers at a high rate. The misclassification can be targeted to specific output classes. Users can employ this compromised model unaware of its embedded adversarial nature, as it functions identically to a standard diffusion model. We demonstrate the effectiveness and stealthiness of our approach, uncovering a covert attack vector that raises new security concerns. These findings expose a risk arising from the use of externally supplied models and highlight the urgent need for robust model verification and defense mechanisms against hidden threats in generative models. The code is available at https://github.com/LucasBeerens/CRAFTed-Diffusion.
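The core tension the abstract describes — outputs must stay faithful to the clean model's while being pushed across a downstream classifier's decision boundary — can be sketched with a toy numpy example. This is a minimal illustration under invented assumptions (a 2-D "image" vector, a linear classifier, a hinge-style adversarial term), not the paper's CRAFTed-Diffusion implementation, which fine-tunes the diffusion model's weights rather than a single output:

```python
import numpy as np

# Toy sketch: nudge a generated output g so it stays close to the clean
# output x0 (fidelity term) while the score of a downstream linear
# classifier, w . g, is driven below a negative margin (adversarial term).
# All names and values are illustrative assumptions.
x0 = np.array([1.0, 2.0])   # stand-in for the clean model's output
w = np.array([1.0, 0.5])    # stand-in for a downstream linear classifier
lam, margin, lr = 6.0, 0.5, 0.01  # adversarial weight, target margin, step size

g = x0.copy()
for _ in range(3000):
    score = float(w @ g)
    # hinge-style adversarial gradient: active until score < -margin
    adv_grad = lam * w if score > -margin else 0.0
    # fidelity gradient keeps g near x0; both terms are minimized jointly
    grad = 2.0 * (g - x0) + adv_grad
    g -= lr * grad

# Clean output is classified positive, tampered output negative,
# while g remains close to x0.
print(float(w @ x0), float(w @ g), float(np.linalg.norm(g - x0)))
```

In the actual setting described by the abstract, the fidelity term corresponds to preserving the diffusion model's standard generation quality and the adversarial term to a classifier loss on generated samples; the balance between them is what makes the tampered model stealthy yet effective.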
Similar Papers
Explore the vulnerability of black-box models via diffusion models
CV and Pattern Recognition
Steals AI art to trick other AIs.
AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial Attacks
Machine Learning (CS)
Makes computer vision models see fake things.
DiffCAP: Diffusion-based Cumulative Adversarial Purification for Vision Language Models
CV and Pattern Recognition
Fixes AI mistakes caused by tricky images.