Perturbation-Induced Linearization: Constructing Unlearnable Data with Solely Linear Classifiers

Published: January 27, 2026 | arXiv ID: 2601.19967v1

By: Jinlin Liu, Wei Chen, Xiaojin Zhang

Potential Business Impact:

Lets data owners add invisible perturbations to images so AI models cannot learn from them, protecting private data from unauthorized training.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Collecting web data to train deep models has become increasingly common, raising concerns about unauthorized data usage. To mitigate this issue, unlearnable examples introduce imperceptible perturbations into data, preventing models from learning effectively. However, existing methods typically rely on deep neural networks as surrogate models for perturbation generation, resulting in significant computational costs. In this work, we propose Perturbation-Induced Linearization (PIL), a computationally efficient yet effective method that generates perturbations using only linear surrogate models. PIL achieves comparable or better performance than existing surrogate-based methods while reducing computational time dramatically. We further reveal a key mechanism underlying unlearnable examples: they induce linearization in deep models, which explains why PIL achieves competitive results in very little time. Beyond this, we analyze the properties of unlearnable examples under percentage-based partial perturbation. Our work not only provides a practical approach to data protection but also offers insight into what makes unlearnable examples effective.
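The abstract does not spell out PIL's algorithm, but the core idea it describes, crafting perturbations against a cheap linear surrogate instead of a deep network, can be sketched in a toy form. The snippet below is an illustrative assumption, not the paper's method: it fits a logistic-regression surrogate on synthetic data, then runs gradient descent on a class-wise, L-infinity-bounded perturbation that minimizes the surrogate's loss, the classic "error-minimizing" recipe for unlearnable examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (an assumption for illustration; the paper
# evaluates on image datasets, which the abstract does not detail).
n, d = 200, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(float)  # labels depend on the first feature

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# 1) Fit the linear surrogate by plain gradient descent.
w = np.zeros(d)
for _ in range(200):
    p = sigmoid(X @ w)
    w -= 0.5 * (X.T @ (p - y) / n)

# 2) Craft one error-minimizing perturbation per class: nudge each
#    class's inputs so the *fixed* surrogate classifies them more
#    confidently, under an L-infinity budget eps.
eps = 0.5
delta = np.zeros((2, d))
for _ in range(100):
    for c in (0, 1):
        Xc = X[y == c] + delta[c]
        p = sigmoid(Xc @ w)
        # gradient of the class-c loss w.r.t. the shared perturbation
        g = w * np.mean(p - c)
        delta[c] = np.clip(delta[c] - 0.5 * g, -eps, eps)

X_unlearnable = X + delta[y.astype(int)]

loss_clean = logistic_loss(w, X, y)
loss_pert = logistic_loss(w, X_unlearnable, y)
print(loss_pert < loss_clean)  # perturbed data carries almost no loss signal
```

Because the perturbed data already yields near-zero loss, a model trained on it receives little gradient signal, which is the intuition behind "unlearnable" examples; PIL's contribution is that a linear surrogate suffices for this, making generation far cheaper than with a deep surrogate.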

Country of Origin
πŸ‡¨πŸ‡³ China


Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)