Perturbation-Induced Linearization: Constructing Unlearnable Data with Solely Linear Classifiers
By: Jinlin Liu, Wei Chen, Xiaojin Zhang
Potential Business Impact:
Stops AI from learning private info from your pictures.
Collecting web data to train deep models has become increasingly common, raising concerns about unauthorized data usage. To mitigate this issue, unlearnable examples introduce imperceptible perturbations into data, preventing models from learning from it effectively. However, existing methods typically rely on deep neural networks as surrogate models for perturbation generation, incurring significant computational cost. In this work, we propose Perturbation-Induced Linearization (PIL), a computationally efficient yet effective method that generates perturbations using only linear surrogate models. PIL matches or exceeds the performance of existing surrogate-based methods while dramatically reducing computation time. We further reveal a key mechanism underlying unlearnable examples: they induce linearization in deep models, which explains why PIL can achieve competitive results in very little time. Beyond this, we provide an analysis of the properties of unlearnable examples under percentage-based partial perturbation. Our work not only provides a practical approach to data protection but also offers insight into what makes unlearnable examples effective.
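To make the idea concrete, below is a minimal NumPy sketch of how error-minimizing perturbations could be generated with a purely linear surrogate, in the spirit of PIL. This is not the authors' implementation: the function name, the hyperparameters (eps, steps, lr), and the alternating surrogate/perturbation update scheme are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): class-wise error-minimizing
# perturbations computed with a *linear* softmax surrogate.
# All names and hyperparameters are illustrative assumptions.
import numpy as np

def make_unlearnable(X, y, num_classes, eps=8 / 255, steps=20, lr=0.1):
    """X: (n, d) flattened images in [0, 1]; y: (n,) integer labels.
    Returns X + delta with ||delta||_inf <= eps, where delta is chosen
    so a linear classifier fits the perturbed data almost perfectly."""
    n, d = X.shape
    W = np.zeros((num_classes, d))          # linear surrogate weights
    b = np.zeros(num_classes)               # linear surrogate bias
    delta = np.zeros_like(X)                # per-sample perturbation

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    Y = np.eye(num_classes)[y]              # one-hot labels

    for _ in range(steps):
        Xp = np.clip(X + delta, 0.0, 1.0)
        P = softmax(Xp @ W.T + b)           # surrogate predictions
        G = P - Y                            # d(cross-entropy)/d(logits)

        # (1) update the linear surrogate to reduce its own loss
        W -= lr * (G.T @ Xp) / n
        b -= lr * G.mean(axis=0)

        # (2) update perturbations to *minimize* the surrogate's loss
        #     (error-minimizing noise), then project to the eps-ball
        grad_x = G @ W                       # d(cross-entropy)/dX
        delta -= lr * grad_x
        delta = np.clip(delta, -eps, eps)

    return np.clip(X + delta, 0.0, 1.0)
```

The point the sketch illustrates is that both the surrogate update and the perturbation update reduce to matrix products with a single weight matrix, which is why a linear surrogate is far cheaper than the deep-network surrogates used by prior unlearnable-example methods.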
Similar Papers
Inducing Uncertainty for Test-Time Privacy
Machine Learning (CS)
Makes AI forget data, even when it tries.
How Far Are We from True Unlearnability?
Machine Learning (CS)
Protects data so computers can't learn from it.
Data-Free Privacy-Preserving for LLMs via Model Inversion and Selective Unlearning
Cryptography and Security
Removes private info from AI without training data.