MTL-UE: Learning to Learn Nothing for Multi-Task Learning
By: Yi Yu, Song Xia, Siyuan Yang, and more
Potential Business Impact:
Stops unauthorized people from training AI models on protected data.
Most existing unlearnable strategies focus on preventing unauthorized users from training single-task learning (STL) models on personal data. The paradigm, however, has recently shifted toward multi-task data and multi-task learning (MTL), targeting generalist and foundation models that handle multiple tasks simultaneously. Despite their growing importance, MTL data and models have been largely overlooked in the design of unlearnable strategies. This paper presents MTL-UE, the first unified framework for generating unlearnable examples for multi-task data and MTL models. Instead of optimizing a perturbation for each sample, we design a generator-based structure that introduces label priors and class-wise feature embeddings, which leads to much better attacking performance. In addition, MTL-UE incorporates intra-task and inter-task embedding regularization to increase inter-class separation and suppress intra-class variance, which greatly enhances attack robustness. Furthermore, MTL-UE is versatile, with good support for dense prediction tasks in MTL, and plug-and-play, allowing existing surrogate-dependent unlearnable methods to be integrated with little adaptation. Extensive experiments show that MTL-UE achieves consistently superior attacking performance across 4 MTL datasets, 3 base UE methods, 5 model backbones, and 5 MTL task-weighting strategies.
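To make the generator-based design more concrete, here is a minimal PyTorch sketch of a label-conditioned perturbation generator with per-task class-wise embeddings and intra-/inter-task embedding regularization. It illustrates the ideas named in the abstract and is not the authors' implementation: all module names, dimensions, loss forms, and weights below are assumptions.

```python
# Minimal sketch (not the authors' code): a label-conditioned perturbation
# generator with class-wise embeddings per task, plus intra-/inter-task
# embedding regularizers. Shapes, names, and weights are illustrative.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbationGenerator(nn.Module):
    def __init__(self, num_classes_per_task, embed_dim=64,
                 image_shape=(3, 32, 32), eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.image_shape = image_shape
        # One class-wise embedding table per task (the "label prior").
        self.embeddings = nn.ModuleList(
            nn.Embedding(n, embed_dim) for n in num_classes_per_task
        )
        in_dim = embed_dim * len(num_classes_per_task)
        out_dim = math.prod(image_shape)
        self.decoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, labels):
        # labels: list of (B,) label tensors, one per task.
        feats = [emb(y) for emb, y in zip(self.embeddings, labels)]
        z = torch.cat(feats, dim=1)
        delta = self.decoder(z).view(-1, *self.image_shape)
        return self.eps * delta  # bounded additive perturbation


def embedding_regularization(embeddings):
    # Intra-task term: penalize similarity between class embeddings of the
    # same task, pushing classes apart (inter-class separation).
    intra = 0.0
    for emb in embeddings:
        w = F.normalize(emb.weight, dim=1)
        sim = w @ w.t()
        off_diag = sim - torch.diag(torch.diag(sim))
        intra = intra + off_diag.abs().mean()
    # Inter-task term: penalize similarity between embedding tables of
    # different tasks, decorrelating tasks from one another.
    inter = 0.0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            wi = F.normalize(embeddings[i].weight, dim=1)
            wj = F.normalize(embeddings[j].weight, dim=1)
            inter = inter + (wi @ wj.t()).abs().mean()
    return intra, inter


# Usage: generate perturbations for a batch with two hypothetical tasks.
gen = PerturbationGenerator(num_classes_per_task=[10, 5])
labels = [torch.randint(0, 10, (4,)), torch.randint(0, 5, (4,))]
delta = gen(labels)                        # (4, 3, 32, 32) additive noise
intra, inter = embedding_regularization(gen.embeddings)
loss_reg = 1.0 * intra + 1.0 * inter       # weights are placeholders
```

In a full pipeline, such a generator would presumably be trained jointly with a surrogate MTL model (as in existing surrogate-dependent unlearnable-example methods) so that the bounded perturbations minimize the surrogate's training loss across tasks, while the regularizers keep class embeddings well separated within each task and decorrelated across tasks.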
Similar Papers
How Far Are We from True Unlearnability?
Machine Learning (CS)
Protects data so computers can't learn from it.
T2UE: Generating Unlearnable Examples from Text Descriptions
Artificial Intelligence
Protects your private pictures using only words.
Leveraging Multi-Task Learning for Multi-Label Power System Security Assessment
Systems and Control
Checks power grids for problems faster and better.