
Compressed Models are NOT Trust-equivalent to Their Large Counterparts

Published: August 19, 2025 | arXiv ID: 2508.13533v1

By: Rohit Raj Rai, Chirag Kothari, Siddhesh Shelke, and more

Potential Business Impact:

Checks whether smaller, compressed AI models base their predictions on the same features, and with the same reliability, as the larger models they replace.

Business Areas:
Simulation Software

Large deep learning models are often compressed before deployment in resource-constrained environments. Can we trust the predictions of a compressed model just as we trust the predictions of the original large model? Existing work has studied the effect of compression on accuracy and related performance measures in depth; however, performance parity does not guarantee trust-equivalence. We propose a two-dimensional framework for trust-equivalence evaluation. First, interpretability alignment measures whether the models base their predictions on the same input features; we measure it with LIME and SHAP tests. Second, calibration similarity measures whether the models exhibit comparable reliability in their predicted probabilities; we assess it via ECE, MCE, Brier score, and reliability diagrams. We conducted experiments using BERT-base as the large model and several of its compressed variants, focusing on two text classification tasks: natural language inference and paraphrase identification. Our results reveal low interpretability alignment and a significant mismatch in calibration similarity, even when the models' accuracies are nearly identical. These findings show that compressed models are not trust-equivalent to their large counterparts, and that deploying a compressed model as a drop-in replacement for a large one requires careful assessment going beyond performance parity.
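The two evaluation dimensions described in the abstract can be made concrete with a small sketch. The snippet below is not the authors' implementation; the bin count, the toy data, and the use of Spearman rank correlation as the alignment measure are illustrative assumptions. It computes ECE and Brier score for each model's predicted probabilities (calibration similarity) and a simple rank-correlation measure over per-feature attribution scores, such as those produced by LIME or SHAP (interpretability alignment).

```python
# Minimal sketch (not the paper's code) of the two trust-equivalence checks:
# calibration metrics (ECE, Brier score) and attribution rank alignment.
# Bin count, toy data, and the Spearman measure are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr


def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average of |accuracy - confidence| over confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece


def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and one-hot labels."""
    one_hot = np.eye(probs.shape[1])[labels]
    return np.mean(np.sum((probs - one_hot) ** 2, axis=1))


def attribution_alignment(attr_large, attr_small):
    """Spearman correlation between two models' per-feature attribution
    scores (e.g., LIME or SHAP values) for the same input."""
    rho, _ = spearmanr(attr_large, attr_small)
    return rho


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-ins for softmax outputs of a large and a compressed model.
    probs_large = rng.dirichlet(np.ones(3) * 5, size=200)
    probs_small = rng.dirichlet(np.ones(3) * 2, size=200)
    labels = rng.integers(0, 3, size=200)

    for name, probs in [("large", probs_large), ("compressed", probs_small)]:
        conf = probs.max(axis=1)
        correct = (probs.argmax(axis=1) == labels).astype(float)
        print(name,
              "ECE:", round(expected_calibration_error(conf, correct), 4),
              "Brier:", round(brier_score(probs, labels), 4))

    # Toy attributions over 12 input tokens for one example.
    attr_large = rng.normal(size=12)
    attr_small = attr_large + rng.normal(scale=1.0, size=12)
    print("attribution alignment (Spearman):",
          round(attribution_alignment(attr_large, attr_small), 3))
```

Two models can score identically on accuracy while diverging on all three of these quantities, which is the gap between performance parity and trust-equivalence that the paper highlights.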

Country of Origin
🇮🇳 India

Page Count
7 pages

Category
Computer Science:
Computation and Language