Safeguarding LLM Fine-tuning via Push-Pull Distributional Alignment
By: Haozhong Wang, Zhuo Li, Yibo Yang, and more
Potential Business Impact:
Keeps AI from learning bad things during training.
The inherent safety alignment of Large Language Models (LLMs) is prone to erosion during fine-tuning, even when using seemingly innocuous datasets. While existing defenses attempt to mitigate this via data selection, they typically rely on heuristic, instance-level assessments that neglect the global geometry of the data distribution and fail to explicitly repel harmful patterns. To address this, we introduce Safety Optimal Transport (SOT), a novel framework that reframes safe fine-tuning from an instance-level filtering challenge to a distribution-level alignment task grounded in Optimal Transport (OT). At its core is a dual-reference "push-pull" weight-learning mechanism: SOT optimizes sample importance by actively pulling the downstream distribution towards a trusted safe anchor while simultaneously pushing it away from a general harmful reference. This establishes a robust geometric safety boundary that effectively purifies the training data. Extensive experiments across diverse model families and domains demonstrate that SOT significantly enhances model safety while maintaining competitive downstream performance, achieving a superior safety-utility trade-off compared to baselines.
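As a rough illustration of the push-pull idea described in the abstract, the sketch below learns per-sample importance weights by minimizing an entropic OT cost from the weighted fine-tuning data to a safe anchor set while maximizing the OT cost to a harmful reference set. Everything here (the `sinkhorn` helper, `push_pull_weights`, `lambda_push`, and the use of squared-Euclidean costs over example embeddings) is an assumption for illustration only, not the authors' implementation.

```python
# Minimal sketch of a "push-pull" sample-weighting objective (assumed, not the paper's code).
import torch

def sinkhorn(a, b, C, reg=0.1, n_iters=200):
    """Entropic-regularized OT cost between weight vectors a, b under cost matrix C."""
    K = torch.exp(-C / reg)                      # Gibbs kernel
    u = torch.ones_like(a)
    v = torch.ones_like(b)
    for _ in range(n_iters):                     # Sinkhorn fixed-point iterations
        v = b / (K.t() @ u + 1e-9)
        u = a / (K @ v + 1e-9)
    P = torch.diag(u) @ K @ torch.diag(v)        # transport plan
    return (P * C).sum()                         # transport cost

def push_pull_weights(X, X_safe, X_harm, lambda_push=1.0, steps=300, lr=0.05):
    """Learn sample weights that pull the weighted fine-tuning distribution toward
    a safe anchor set and push it away from a harmful reference set (assumed objective)."""
    C_safe = torch.cdist(X, X_safe) ** 2         # cost to safe-anchor embeddings
    C_harm = torch.cdist(X, X_harm) ** 2         # cost to harmful-reference embeddings
    b_safe = torch.full((X_safe.size(0),), 1.0 / X_safe.size(0))
    b_harm = torch.full((X_harm.size(0),), 1.0 / X_harm.size(0))
    logits = torch.zeros(X.size(0), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        w = torch.softmax(logits, dim=0)         # sample importance weights on the simplex
        # Pull toward the safe anchor, push away from the harmful reference.
        loss = sinkhorn(w, b_safe, C_safe) - lambda_push * sinkhorn(w, b_harm, C_harm)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()
```

In this reading, the returned weights would reweight (or filter) each fine-tuning example's loss so the training distribution sits near the safe anchor and away from the harmful reference; the paper's actual objective, regularization, and choice of references may differ.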
Similar Papers
AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin
Machine Learning (CS)
Keeps AI safe from bad training data.
Safety at One Shot: Patching Fine-Tuned LLMs with A Single Instance
Machine Learning (CS)
Fixes AI safety without hurting its smarts.