Defending Diffusion Models Against Membership Inference Attacks via Higher-Order Langevin Dynamics
By: Benjamin Sterling, Yousef El-Laham, Mónica F. Bugallo
Potential Business Impact:
Protects private data used to train AI.
Recent advances in generative artificial intelligence applications have raised new data security concerns. This paper focuses on defending diffusion models against membership inference attacks, in which an adversary attempts to determine whether a particular data point was used to train the model. Although diffusion models are intrinsically more resistant to membership inference attacks than other generative models, they are still susceptible. The defense proposed here utilizes critically-damped higher-order Langevin dynamics, which introduces several auxiliary variables and a joint diffusion process over these variables. The idea is that the auxiliary variables mix in external randomness that corrupts sensitive input data earlier in the diffusion process. This concept is theoretically investigated and validated on a toy dataset and a speech dataset using the Area Under the Receiver Operating Characteristic curve (AUROC) and the Fréchet Inception Distance (FID) metric.
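The mechanism can be illustrated with a small numerical sketch. The snippet below simulates the second-order (critically-damped) special case of such Langevin dynamics with an Euler-Maruyama discretization, showing how the injected noise enters only through an auxiliary velocity variable that is itself initialized at random, so the data coordinate is perturbed by this extra randomness from the very first steps. The function name `cld_forward` and all parameter values (`beta`, `M`, the damping choice) are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (not the authors' code): forward simulation of a
# critically-damped Langevin diffusion, the second-order special case of
# the higher-order dynamics described above. The data x is coupled to an
# auxiliary velocity v; all Brownian noise is injected into v, and v is
# drawn from a Gaussian at the start, so x is corrupted early and only
# indirectly through the coupling.
import numpy as np

def cld_forward(x0, n_steps=1000, T=1.0, beta=4.0, M=0.25, seed=0):
    """Euler-Maruyama simulation of the SDE
         dx = (v / M) * beta dt
         dv = -x * beta dt - (Gamma / M) * v * beta dt + sqrt(2 Gamma beta) dW
       with critical damping Gamma^2 = 4 M (values here are illustrative)."""
    rng = np.random.default_rng(seed)
    gamma = 2.0 * np.sqrt(M)                       # critical damping condition
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    v = np.sqrt(M) * rng.standard_normal(x.shape)  # auxiliary variable ~ N(0, M I)
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(x.shape)
        dx = (v / M) * beta * dt
        dv = (-x * beta - (gamma / M) * v * beta) * dt + np.sqrt(2.0 * gamma * beta) * dW
        x, v = x + dx, v + dv
    return x, v

if __name__ == "__main__":
    noisy_x, noisy_v = cld_forward(x0=np.ones(8))
    print(noisy_x)  # data coordinate after diffusing jointly with the auxiliary variable
```

In this sketch, an attacker observing intermediate states of x sees the influence of the randomly initialized auxiliary variable in addition to the Brownian noise, which is the intuition behind the claimed resistance to membership inference.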
Similar Papers
Inference Attacks Against Graph Generative Diffusion Models
Machine Learning (CS)
Protects private data used to train AI.
Unveiling Impact of Frequency Components on Membership Inference Attacks for Diffusion Models
Cryptography and Security
Finds if your pictures were used to train AI.
On the MIA Vulnerability Gap Between Private GANs and Diffusion Models
Machine Learning (CS)
Makes AI art safer from spying.