Compressed Models are NOT Trust-equivalent to Their Large Counterparts
By: Rohit Raj Rai, Chirag Kothari, Siddhesh Shelke, and more
Potential Business Impact:
Checks if smaller AI models think like bigger ones.
Large deep learning models are often compressed before deployment in resource-constrained environments. Can we trust the predictions of a compressed model just as we trust those of the original large model? Existing work has closely studied the effect of compression on accuracy and related performance measures; however, performance parity does not guarantee trust-equivalence. We propose a two-dimensional framework for evaluating trust-equivalence. First, interpretability alignment measures whether the models base their predictions on the same input features; we assess it with LIME- and SHAP-based tests. Second, calibration similarity measures whether the models exhibit comparable reliability in their predicted probabilities; we assess it via ECE, MCE, Brier score, and reliability diagrams. We conducted experiments using BERT-base as the large model and several of its compressed variants, focusing on two text classification tasks: natural language inference and paraphrase identification. Our results reveal low interpretability alignment and significant mismatch in calibration similarity, even when model accuracies are nearly identical. These findings show that compressed models are not trust-equivalent to their large counterparts: deploying a compressed model as a drop-in replacement for a large one requires careful assessment that goes beyond performance parity.
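As a concrete illustration of the calibration-similarity dimension, the two scalar metrics named in the abstract (ECE and Brier score) can be sketched in a few lines of NumPy. This is a minimal sketch, not the authors' implementation; the function names and the equal-width binning scheme are illustrative assumptions.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence (max class probability),
    then average |accuracy - mean confidence| per bin, weighted by
    the fraction of samples falling in that bin."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    correct = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by bin population
    return ece

def brier_score(probs, labels):
    """Multiclass Brier score: mean squared error between the
    predicted probability vector and the one-hot true label."""
    onehot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))
```

Comparing these values between a large model and its compressed variant (e.g. via the absolute difference in ECE on the same test set) gives one simple measure of calibration similarity, independent of whether the two models agree on accuracy.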
Similar Papers
Downsized and Compromised?: Assessing the Faithfulness of Model Compression
Machine Learning (CS)
Checks if smaller AI still acts like the big AI.
Model Compression vs. Adversarial Robustness: An Empirical Study on Language Models for Code
Software Engineering
Makes AI code checkers less safe when smaller.
Decomposed Trust: Exploring Privacy, Adversarial Robustness, Fairness, and Ethics of Low-Rank LLMs
Machine Learning (CS)
Makes AI safer and fairer after shrinking it.