Pruning Weights but Not Truth: Safeguarding Truthfulness While Pruning LLMs
By: Yao Fu, Runchao Li, Xianxuan Long, and more
Potential Business Impact:
Keeps AI honest after making it smaller.
Neural network pruning has emerged as a promising approach for deploying LLMs in low-resource scenarios while preserving downstream task performance. However, for the first time, we reveal that such pruning disrupts the LLMs' internal activation features crucial for lie detection, where probing classifiers (typically small logistic regression models) trained on these features assess the truthfulness of LLM-generated statements. This discovery raises a crucial open question: how can we prune LLMs without sacrificing these critical lie detection capabilities? Our investigation further reveals that naively adjusting layer-wise pruning sparsity based on importance inadvertently removes crucial weights, and it fails to improve lie detection performance even though it prioritizes the most important LLM layer. To address this issue, we propose Truthful Pruning aligned by Layer-wise Outliers (TPLO), which places greater emphasis on layers that simultaneously contain more activation outliers and stronger discriminative features. This preserves the LLMs' original performance while retaining the critical internal-state features needed for robust lie detection. Moreover, we introduce a prompting rule to enrich the TruthfulQA benchmark for better calibrating LLM pruning. Empirical results show that our approach improves hallucination detection for pruned LLMs (achieving 88% accuracy at 50% sparsity) and enhances their performance on TruthfulQA.
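The probes the abstract refers to are small logistic-regression classifiers trained on internal activation features to predict whether a statement is truthful. A minimal, self-contained sketch of such a probe, assuming synthetic stand-in "activation" features and a pure-Python gradient-descent fit (the data shape, learning rate, and epoch count here are illustrative assumptions, not the paper's actual setup):

```python
import math
import random

def train_probe(feats, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression probe: sigmoid(w.x + b) ~ P(truthful)."""
    dim = len(feats[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            z = max(min(z, 30.0), -30.0)      # clamp to avoid exp overflow
            p = 1.0 / (1.0 + math.exp(-z))    # predicted P(truthful)
            g = p - y                         # gradient of log-loss w.r.t. z
            for i in range(dim):
                w[i] -= lr * g * x[i]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Hard 0/1 prediction from the linear score."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

# Synthetic stand-in for hidden-state activations: two separable clusters,
# label 1 = "truthful" statement features, label 0 = "untruthful".
random.seed(0)
feats = [[random.gauss(2, 1), random.gauss(2, 1)] for _ in range(50)] + \
        [[random.gauss(-2, 1), random.gauss(-2, 1)] for _ in range(50)]
labels = [1] * 50 + [0] * 50

w, b = train_probe(feats, labels)
acc = sum(predict(w, b, x) == y for x, y in zip(feats, labels)) / len(labels)
```

The paper's finding, in these terms, is that pruning the LLM perturbs the feature vectors such a probe is trained on, so probe accuracy degrades unless pruning is aligned with the layers carrying the discriminative features.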
Similar Papers
Fewer Weights, More Problems: A Practical Attack on LLM Pruning
Machine Learning (CS)
Pruning AI models can hide bad behavior.
SparseSwaps: Tractable LLM Pruning Mask Refinement at Scale
Machine Learning (CS)
Makes big computer brains smaller, faster, and smarter.
Breaking Expert Knowledge Limits: Self-Pruning for Large Language Models
CV and Pattern Recognition
Lets computers shrink themselves to work better.