Safeguarding LLM Fine-tuning via Push-Pull Distributional Alignment

Published: January 12, 2026 | arXiv ID: 2601.07200v1

By: Haozhong Wang, Zhuo Li, Yibo Yang, and more

Potential Business Impact:

Prevents AI models from picking up unsafe behaviors when they are fine-tuned on new data.

Business Areas:
Autonomous Vehicles, Transportation

The inherent safety alignment of Large Language Models (LLMs) is prone to erosion during fine-tuning, even when using seemingly innocuous datasets. While existing defenses attempt to mitigate this via data selection, they typically rely on heuristic, instance-level assessments that neglect the global geometry of the data distribution and fail to explicitly repel harmful patterns. To address this, we introduce Safety Optimal Transport (SOT), a novel framework that reframes safe fine-tuning from an instance-level filtering challenge to a distribution-level alignment task grounded in Optimal Transport (OT). At its core is a dual-reference "push-pull" weight-learning mechanism: SOT optimizes sample importance by actively pulling the downstream distribution towards a trusted safe anchor while simultaneously pushing it away from a general harmful reference. This establishes a robust geometric safety boundary that effectively purifies the training data. Extensive experiments across diverse model families and domains demonstrate that SOT significantly enhances model safety while maintaining competitive downstream performance, achieving a superior safety-utility trade-off compared to baselines.
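The abstract describes the push-pull mechanism only at a high level, so the following is a minimal, hedged sketch of what such a dual-reference OT weighting could look like: learnable per-sample weights on the fine-tuning set are optimized to reduce an entropic OT cost to a safe anchor set while increasing the cost to a harmful reference set. The Sinkhorn implementation, feature embeddings, and hyperparameters (lambda_push, reg, n_iters) are illustrative assumptions, not the authors' specification.

```python
# Hedged sketch of a "push-pull" OT-based sample-weighting objective,
# loosely following the abstract. Not the paper's actual algorithm.
import torch

def sinkhorn_cost(a, b, C, reg=0.1, n_iters=50):
    """Entropic-regularized OT cost <P, C> between histograms a, b under cost matrix C."""
    K = torch.exp(-C / reg)                      # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(n_iters):                     # Sinkhorn fixed-point iterations
        v = b / (K.t() @ u + 1e-9)
        u = a / (K @ v + 1e-9)
    P = u[:, None] * K * v[None, :]              # transport plan
    return (P * C).sum()

def push_pull_loss(logits_w, X_ft, X_safe, X_harm, lambda_push=0.5):
    """Pull the reweighted fine-tuning distribution toward the safe anchor,
    push it away from the harmful reference."""
    w = torch.softmax(logits_w, dim=0)           # learnable sample-importance weights
    b_safe = torch.full((X_safe.size(0),), 1.0 / X_safe.size(0))
    b_harm = torch.full((X_harm.size(0),), 1.0 / X_harm.size(0))
    C_safe = torch.cdist(X_ft, X_safe) ** 2      # squared-Euclidean cost in feature space
    C_harm = torch.cdist(X_ft, X_harm) ** 2
    C_safe, C_harm = C_safe / C_safe.max(), C_harm / C_harm.max()  # normalize for stability
    pull = sinkhorn_cost(w, b_safe, C_safe)
    push = sinkhorn_cost(w, b_harm, C_harm)
    return pull - lambda_push * push

# Toy usage: random embeddings stand in for features of each data split.
torch.manual_seed(0)
X_ft, X_safe, X_harm = torch.randn(64, 16), torch.randn(32, 16), torch.randn(32, 16)
logits_w = torch.zeros(64, requires_grad=True)
opt = torch.optim.Adam([logits_w], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    push_pull_loss(logits_w, X_ft, X_safe, X_harm).backward()
    opt.step()
weights = torch.softmax(logits_w, dim=0)         # high-weight samples retained for fine-tuning
```

In this toy version, the learned weights play the role of a soft data-selection signal: samples whose features sit near the safe anchor and far from the harmful reference receive more mass, which is the geometric "safety boundary" intuition the abstract describes.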

Country of Origin
🇨🇳 China

Page Count
20 pages

Category
Computer Science:
Machine Learning (CS)