MTL-UE: Learning to Learn Nothing for Multi-Task Learning

Published: May 8, 2025 | arXiv ID: 2505.05279v1

By: Yi Yu, Song Xia, Siyuan Yang, and more

Potential Business Impact:

Prevents unauthorized parties from training multi-task AI models on protected personal data.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Most existing unlearnable strategies focus on preventing unauthorized users from training single-task learning (STL) models with personal data. Nevertheless, the paradigm has recently shifted towards multi-task data and multi-task learning (MTL), targeting generalist and foundation models that can handle multiple tasks simultaneously. Despite their growing importance, MTL data and models have been largely neglected in the pursuit of unlearnable strategies. This paper presents MTL-UE, the first unified framework for generating unlearnable examples for multi-task data and MTL models. Instead of optimizing perturbations for each sample, we design a generator-based structure that introduces label priors and class-wise feature embeddings, which leads to much stronger attack performance. In addition, MTL-UE incorporates intra-task and inter-task embedding regularization to increase inter-class separation and suppress intra-class variance, which greatly enhances attack robustness. Furthermore, MTL-UE is versatile, with strong support for dense prediction tasks in MTL. It is also plug-and-play, allowing existing surrogate-dependent unlearnable methods to be integrated with little adaptation. Extensive experiments show that MTL-UE achieves superior attack performance consistently across 4 MTL datasets, 3 base UE methods, 5 model backbones, and 5 MTL task-weighting strategies.
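To make the generator-based design concrete, below is a minimal sketch of a label-conditioned perturbation generator with per-task class embeddings and an inter-class separation regularizer. It is an assumed PyTorch-style illustration: the class names, shapes, epsilon budget, and regularizer form are hypothetical and do not reproduce the paper's released implementation.

```python
# Minimal sketch (assumed PyTorch-style): a generator maps per-task class
# embeddings (the label prior) to a bounded, image-shaped perturbation, and a
# regularizer pushes class embeddings within a task apart. All names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTLPerturbationGenerator(nn.Module):
    def __init__(self, num_classes_per_task, embed_dim=64,
                 image_shape=(3, 32, 32), epsilon=8 / 255):
        super().__init__()
        self.epsilon = epsilon
        self.image_shape = image_shape
        # One class-wise embedding table per task (label prior).
        self.class_embeddings = nn.ModuleList(
            nn.Embedding(n_cls, embed_dim) for n_cls in num_classes_per_task
        )
        c, h, w = image_shape
        # Small decoder from concatenated task embeddings to a perturbation.
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim * len(num_classes_per_task), 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, c * h * w),
        )

    def forward(self, labels_per_task):
        # labels_per_task: list of LongTensors, one (batch,) tensor per task.
        embs = [emb(y) for emb, y in zip(self.class_embeddings, labels_per_task)]
        z = torch.cat(embs, dim=-1)
        delta = self.decoder(z).view(-1, *self.image_shape)
        # Bound the perturbation to an L-infinity ball of radius epsilon.
        return self.epsilon * torch.tanh(delta)

def inter_class_regularizer(emb_table, margin=1.0):
    """Penalize pairs of class embeddings in one task that are too similar,
    encouraging inter-class separation. Intra-class variance is suppressed
    by construction, since every sample of a class shares one embedding."""
    w = F.normalize(emb_table.weight, dim=-1)
    sim = w @ w.t() - torch.eye(len(w), device=w.device)
    return F.relu(sim - (1.0 - margin)).mean()

# Example usage: two tasks with 10 and 5 classes, a batch of 4 samples.
gen = MTLPerturbationGenerator(num_classes_per_task=[10, 5])
labels = [torch.randint(0, 10, (4,)), torch.randint(0, 5, (4,))]
delta = gen(labels)  # (4, 3, 32, 32) bounded perturbation added to the images
reg = sum(inter_class_regularizer(e) for e in gen.class_embeddings)
```

In this reading, the class-conditioned perturbation acts as a shortcut feature that the MTL model learns instead of the real image content, which is why one shared generator can poison all tasks at once.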

Country of Origin
🇸🇬 Singapore

Page Count
19 pages

Category
Computer Science:
Machine Learning (CS)