CLOAK: Contrastive Guidance for Latent Diffusion-Based Data Obfuscation
By: Xin Yang, Omid Ardakanian
Potential Business Impact:
Keeps your private sensor data safe on phones and other smart devices.
Data obfuscation is a promising technique for mitigating attribute inference attacks by semi-trusted parties with access to time-series data emitted by sensors. Recent advances leverage conditional generative models together with adversarial training or mutual information-based regularization to balance data privacy and utility. However, these methods often require modifying the downstream task, struggle to achieve a satisfactory privacy-utility trade-off, or are computationally intensive, making them impractical for deployment on resource-constrained mobile IoT devices. We propose Cloak, a novel data obfuscation framework based on latent diffusion models. In contrast to prior work, we employ contrastive learning to extract disentangled representations, which guide the latent diffusion process to retain useful information while concealing private information. This approach enables users with diverse privacy needs to navigate the privacy-utility trade-off with minimal retraining. Extensive experiments on four public time-series datasets, spanning multiple sensing modalities, and a dataset of facial images demonstrate that Cloak consistently outperforms state-of-the-art obfuscation techniques and is well-suited for deployment in resource-constrained settings.
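The abstract mentions using contrastive learning to extract disentangled representations that guide the diffusion process. A common objective for this kind of representation learning is the InfoNCE contrastive loss. The sketch below is purely illustrative (it is not Cloak's implementation, and the function name and hyperparameters are assumptions): it shows how matching pairs of embeddings are pulled together while mismatched pairs act as negatives.

```python
# Illustrative sketch, NOT the paper's implementation: an InfoNCE-style
# contrastive loss, the kind of objective commonly used to learn the
# representations that could guide a latent diffusion model.
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss for paired embeddings.

    anchors, positives: (N, D) arrays; row i of `positives` is the positive
    pair for row i of `anchors`; all other rows serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # true pairs on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Aligned positives (small perturbations) should score a lower loss
# than randomly paired embeddings.
loss_aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))
loss_random = info_nce(z, rng.normal(size=(8, 16)))
```

In a guidance setting, an encoder trained with such a loss would separate task-relevant features from private attributes, and the resulting embeddings would condition the denoising steps of the latent diffusion model.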
Similar Papers
On the Importance of Conditioning for Privacy-Preserving Data Augmentation
CV and Pattern Recognition
Makes fake pictures that fool person-detection tools.
Enhanced Privacy Leakage from Noise-Perturbed Gradients via Gradient-Guided Conditional Diffusion Models
Cryptography and Security
Recovers private pictures from shared model training.
Differential Privacy for Secure Machine Learning in Healthcare IoT-Cloud Systems
Cryptography and Security
Faster, private medical help using smart devices.