Frequency-Calibrated Membership Inference Attacks on Medical Image Diffusion Models
By: Xinkai Zhao, Yuta Tokuoka, Junichiro Iwasawa, and more
Potential Business Impact:
Detects whether your private medical images were used to train an AI model.
The increasing use of diffusion models for image generation, especially in sensitive areas such as medical imaging, has raised significant privacy concerns. Membership inference attacks (MIAs) have emerged as a way to determine whether a specific image was used to train a diffusion model, and thus to quantify privacy risk. Existing MIA methods often rely on diffusion reconstruction error, on the assumption that member images yield lower reconstruction errors than non-member images. Applying these methods directly to medical images is problematic, however: reconstruction error is confounded by inherent image difficulty, and diffusion models struggle to reconstruct high-frequency detail. To address these issues, we propose a Frequency-Calibrated Reconstruction Error (FCRE) method for MIAs on medical image diffusion models. By restricting attention to reconstruction errors within a mid-frequency band, excluding both high-frequency regions (difficult to reconstruct) and low-frequency regions (less informative), our frequency-selective approach mitigates the confounding effect of inherent image difficulty. Concretely, we run the reverse diffusion process, extract the mid-frequency component of the reconstruction, and compute the structural similarity index (SSIM) score between the reconstructed and original images; membership is decided by comparing this score to a threshold. Experiments on several medical image datasets show that FCRE outperforms existing MIA methods.
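The scoring step described above can be sketched in a few lines. The snippet below is a minimal illustration of the idea, not the authors' implementation: it assumes the attacker has already obtained a reverse-diffusion reconstruction of the query image, and the function names (`midband_filter`, `fcre_score`, `infer_membership`), band cutoffs, and decision threshold are all placeholder choices rather than values from the paper.

```python
# Minimal sketch of frequency-calibrated membership scoring, assuming a
# precomputed diffusion reconstruction. Cutoffs and threshold are illustrative.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def midband_filter(img, r_low=0.1, r_high=0.5):
    """Keep only the mid-frequency content of a 2D image via an FFT band-pass mask."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]          # vertical frequencies, cycles/pixel
    fx = np.fft.fftfreq(w)[None, :]          # horizontal frequencies
    radius = np.sqrt(fx**2 + fy**2) / 0.5    # normalize so Nyquist -> 1.0
    mask = (radius >= r_low) & (radius <= r_high)
    spectrum = np.fft.fft2(img)
    return np.real(np.fft.ifft2(spectrum * mask))

def fcre_score(original, reconstruction, r_low=0.1, r_high=0.5):
    """SSIM between the mid-frequency bands of an image and its reconstruction."""
    a = midband_filter(original.astype(np.float64), r_low, r_high)
    b = midband_filter(reconstruction.astype(np.float64), r_low, r_high)
    rng = max(a.max() - a.min(), b.max() - b.min(), 1e-8)
    return ssim(a, b, data_range=rng)

def infer_membership(original, reconstruction, threshold=0.85):
    """Flag the image as a training member when the calibrated score is high."""
    return fcre_score(original, reconstruction) >= threshold
```

In the full attack, the reconstruction would come from running the target model's reverse diffusion process on a partially noised copy of the query image, and the band cutoffs and threshold would be calibrated on held-out member/non-member data.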
Similar Papers
Unveiling Impact of Frequency Components on Membership Inference Attacks for Diffusion Models
Cryptography and Security
Finds if your pictures were used to train AI.
Membership Inference Attacks fueled by Few-Shot Learning to detect privacy leakage tackling data integrity
Cryptography and Security
Finds if private data was used to train AI.
Membership Inference Attacks for Face Images Against Fine-Tuned Latent Diffusion Models
CV and Pattern Recognition
Detects whether your photos were used to train AI art models.