Contrastive Knowledge Transfer and Robust Optimization for Secure Alignment of Large Language Models
By: Jiasen Zheng, Huajun Zhang, Xu Yan, and more
Potential Business Impact:
Makes AI safer and more reliable.
This paper addresses the limitations of large language models in safety alignment and robustness by proposing a fine-tuning method that combines contrastive distillation with noise-robust training. The method freezes the backbone model and transfers the teacher model's knowledge boundaries to the student model through distillation, improving semantic consistency and alignment accuracy. At the same time, noise perturbations and robust optimization constraints are introduced during training so that the model maintains stable predictions under noisy and uncertain inputs. The overall framework combines a distillation loss, a robustness loss, and a regularization term into a unified optimization objective that balances alignment ability against resistance to interference. To validate the method systematically, the study runs experiments from multiple perspectives: sensitivity to the distillation weight, stability under varying computation budgets and mixed-precision settings, and the impact of data noise and distribution shift on model performance. Results show that the method significantly outperforms existing baselines in knowledge transfer, robustness, and overall safety, achieving the best results on several key metrics. The work both extends the theory of parameter-efficient fine-tuning and offers a new route to safer, more trustworthy alignment mechanisms.
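The unified objective described above (distillation loss plus robustness loss plus a regularization term) can be sketched as follows. The abstract does not give the exact loss forms, so this is a minimal illustrative assumption: a temperature-softened KL term for distillation, a consistency penalty between clean and noise-perturbed student outputs for robustness, an L2 penalty on the trainable (non-frozen) parameters, and the weights `lam`, `gamma`, and temperature `T` are all hypothetical.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two probability vectors."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def alignment_objective(teacher_logits, student_logits, student_logits_noisy,
                        trainable_params, lam=0.5, gamma=1e-3, T=2.0):
    """Sketch of the combined objective:
    distillation + lam * robustness + gamma * regularization.
    All weights and loss forms are assumptions, not the paper's exact setup."""
    # Distillation: match the student to the teacher's softened distribution.
    l_distill = kl(softmax(teacher_logits, T), softmax(student_logits, T))
    # Robustness: penalize prediction drift when the input is perturbed by noise.
    l_robust = float(np.mean(
        (softmax(student_logits) - softmax(student_logits_noisy)) ** 2))
    # Regularization: L2 on the small set of trainable parameters
    # (the backbone is frozen, so only adapter-style weights enter here).
    l_reg = float(np.sum(np.asarray(trainable_params, dtype=float) ** 2))
    return l_distill + lam * l_robust + gamma * l_reg
```

With identical teacher and student outputs and zero trainable weights the objective is zero, and it grows as the student drifts from the teacher or becomes unstable under noise, which is the trade-off the unified objective is meant to balance.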
Similar Papers
Parameter-Efficient Fine-Tuning with Differential Privacy for Robust Instruction Adaptation in Large Language Models
Computation and Language
Keeps AI learning private and fast.
Unforgotten Safety: Preserving Safety Alignment of Large Language Models with Continual Learning
Computation and Language
Keeps smart computer programs safe when learning new things.
KL-based self-distillation for large language models
Computation and Language
Teaches computers new words without forgetting old ones.