
Disrupting Model Merging: A Parameter-Level Defense Without Sacrificing Accuracy

Published: March 8, 2025 | arXiv ID: 2503.07661v2

By: Wei Junhao, Yu Zhe, Sakuma Jun

Potential Business Impact:

Prevents free-riders from cheaply copying an AI model's specialized skills by merging it with their own models.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

Model merging is a technique that combines multiple finetuned models into a single model without additional training, allowing a free-rider to cheaply inherit specialized capabilities. This study investigates methodologies to suppress unwanted model merging by free-riders. Existing methods such as model watermarking or fingerprinting can only detect merging after the fact. In contrast, we propose the first proactive defense against model merging. Specifically, our defense modifies the model's parameters so that merging it with any other model severely degrades performance, while its own functionality remains unchanged if it is not merged. Our approach consists of two modules, rearranging MLP parameters and scaling attention heads, which push the model out of the shared basin in parameter space and cause merging performance with other models to degrade significantly. We conduct extensive experiments on image classification, image generation, and text classification to demonstrate that our defense severely disrupts merging while retaining the functionality of the protected model. Moreover, we analyze potential adaptive attacks and further propose a dropout-based pruning method to improve the robustness of our defense.
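The two modules are function-preserving reparameterizations, so the core idea can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: on a toy ReLU MLP in NumPy, permuting hidden units and rescaling them, while compensating in the next layer, leaves the model's outputs unchanged, yet naive weight averaging with an unmodified copy mixes misaligned neurons and the merged outputs drift. The toy network, the random permutation and scales, and the averaging-based merge are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer ReLU MLP: y = W2 @ relu(W1 @ x + b1) + b2
d_in, d_hidden, d_out = 8, 16, 4
W1 = rng.normal(size=(d_hidden, d_in))
b1 = rng.normal(size=d_hidden)
W2 = rng.normal(size=(d_out, d_hidden))
b2 = rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# "Protect" the model with a function-preserving reparameterization:
# 1) permute the hidden units and undo the permutation in the next layer;
# 2) rescale each hidden unit (ReLU is positively homogeneous for s > 0),
#    compensating with the inverse scale on the outgoing weights.
perm = rng.permutation(d_hidden)
scale = rng.uniform(0.5, 2.0, size=d_hidden)

W1_p = W1[perm] * scale[:, None]
b1_p = b1[perm] * scale
W2_p = W2[:, perm] / scale[None, :]

x = rng.normal(size=d_in)
y_orig = mlp(x, W1, b1, W2, b2)
y_prot = mlp(x, W1_p, b1_p, W2_p, b2)
print("protected model unchanged:", np.allclose(y_orig, y_prot))  # True

# A naive merge (plain weight averaging) with an unprotected copy of the same
# model now averages misaligned parameters, so the merged output drifts.
y_merge = mlp(x, (W1 + W1_p) / 2, (b1 + b1_p) / 2, (W2 + W2_p) / 2, b2)
print("merged model drift:", np.max(np.abs(y_merge - y_orig)))
```

In the paper's setting the analogous transformations are applied to transformer blocks, for example scaling attention heads so that each head's output is preserved while its raw parameters no longer align with other finetuned models; the plain averaging above only stands in for the merging methods the paper evaluates.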

Country of Origin
🇯🇵 Japan

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)