Q-realign: Piggybacking Realignment on Quantization for Safe and Efficient LLM Deployment
By: Qitao Tan, Xiaoying Song, Ningxi Cheng, and more
Public large language models (LLMs) are typically safety-aligned during pretraining, yet the task-specific fine-tuning required for deployment often erodes this alignment and introduces safety risks. Existing defenses either embed safety recovery into fine-tuning or rely on fine-tuning-derived priors for post-hoc correction, leaving safety recovery tightly coupled to training and incurring high computational overhead and a complex workflow. To address these challenges, we propose Q-realign, a post-hoc defense method based on post-training quantization and guided by an analysis of representational structure. By reframing quantization as a dual-objective procedure for compression and safety, Q-realign decouples safety alignment from fine-tuning and piggybacks naturally on modern deployment pipelines. Experiments across multiple models and datasets demonstrate that our method substantially reduces unsafe behaviors while preserving task performance, with significant reductions in memory usage and GPU hours. Notably, our approach can recover the safety alignment of a fine-tuned 7B LLM on a single RTX 4090 within 40 minutes. Overall, our work provides a practical, turnkey solution for safety-aware deployment.
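To make the dual-objective framing concrete, the following PyTorch sketch shows one plausible reading of it; this is an illustrative assumption, not the paper's actual algorithm. The functions quantize_rtn and dual_objective, the lam trade-off weight, and the interpolate-then-quantize candidate are all hypothetical: the idea is that a quantized layer is scored by a compression term (matching the fine-tuned layer's outputs on calibration activations) plus a safety term (staying close to the safety-aligned base weights).

import torch

def quantize_rtn(W, n_bits=4):
    # Simple per-channel round-to-nearest quantization (illustrative only).
    qmax = 2 ** (n_bits - 1) - 1
    scale = W.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / qmax
    return (W / scale).round().clamp(-qmax - 1, qmax) * scale

def dual_objective(W_q, W_ft, W_ref, X, lam=0.1):
    # Compression term: match the fine-tuned layer's outputs on calibration
    # activations X. Safety term: pull outputs toward the safety-aligned
    # reference weights. lam trades the two objectives off.
    task = ((X @ W_q.T - X @ W_ft.T) ** 2).mean()
    safety = ((X @ W_q.T - X @ W_ref.T) ** 2).mean()
    return task + lam * safety

# Toy usage: score one candidate quantization of a single linear layer.
torch.manual_seed(0)
W_ft = torch.randn(64, 128)                    # fine-tuned weights (alignment eroded)
W_ref = W_ft + 0.05 * torch.randn_like(W_ft)   # safety-aligned base weights
X = torch.randn(256, 128)                      # calibration activations
W_q = quantize_rtn(0.9 * W_ft + 0.1 * W_ref)   # candidate: interpolate, then quantize
print(dual_objective(W_q, W_ft, W_ref, X).item())

In practice, a method of this kind would search over quantization parameters (or candidate weights) to minimize such a combined objective layer by layer, which is what lets safety recovery ride along with the existing post-training quantization pass rather than requiring extra fine-tuning.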
Similar Papers
Investigating the Impact of Quantization Methods on the Safety and Reliability of Large Language Models
Cryptography and Security
Makes AI safer when made smaller.
Rethinking Output Alignment For 1-bit Post-Training Quantization of Large Language Models
Machine Learning (CS)
Makes tiny AI models work almost as well.
Safety at One Shot: Patching Fine-Tuned LLMs with A Single Instance
Machine Learning (CS)
Fixes AI safety without hurting its smarts.