MASE: Interpretable NLP Models via Model-Agnostic Saliency Estimation
By: Zhou Yang, Shunyan Luo, Jiazhen Zhu, and more
Potential Business Impact:
Shows how AI language models make their decisions.
Deep neural networks (DNNs) have made significant strides in Natural Language Processing (NLP), yet their interpretability remains elusive, particularly when evaluating their intricate decision-making processes. Traditional methods often rely on post-hoc interpretations, such as saliency maps or feature visualization, which might not be directly applicable to the discrete nature of word data in NLP. Addressing this, we introduce the Model-agnostic Saliency Estimation (MASE) framework. MASE offers local explanations for text-based predictive models without necessitating in-depth knowledge of a model's internal architecture. By leveraging Normalized Linear Gaussian Perturbations (NLGP) on the embedding layer instead of raw word inputs, MASE efficiently estimates input saliency. Our results indicate MASE's superiority over other model-agnostic interpretation methods, especially in terms of Delta Accuracy, positioning it as a promising tool for elucidating the operations of text-based models in NLP.
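The abstract describes the core mechanism at a high level: perturb the embedding layer with normalized Gaussian noise and read off a per-token saliency score from the black-box model's responses. As a rough illustration of that idea only (not the paper's exact NLGP formulation; the scorer `predict_fn`, the noise scale `sigma`, and the sample count are illustrative assumptions), a minimal sketch could look like this:

```python
import numpy as np


def mase_saliency(predict_fn, embeddings, n_samples=512, sigma=0.05, seed=0):
    """Black-box, embedding-level saliency sketch (assumed, simplified).

    predict_fn : callable taking an (n_tokens, dim) embedding matrix and
                 returning a scalar score for the class of interest.
    embeddings : (n_tokens, dim) array holding the embedded input tokens.
    Returns one saliency value per token.
    """
    rng = np.random.default_rng(seed)
    n_tokens, dim = embeddings.shape

    # Scale Gaussian noise by each token's embedding norm so perturbations
    # are comparable across tokens (a stand-in for "normalized" perturbations).
    token_norms = np.linalg.norm(embeddings, axis=1, keepdims=True) + 1e-8

    baseline = predict_fn(embeddings)
    grad_estimate = np.zeros_like(embeddings)
    for _ in range(n_samples):
        noise = rng.normal(0.0, sigma, size=(n_tokens, dim)) * token_norms
        # Gaussian-smoothing sensitivity estimate: E[(f(x + eps) - f(x)) * eps] / var
        grad_estimate += (predict_fn(embeddings + noise) - baseline) * noise

    grad_estimate /= n_samples * (sigma * token_norms) ** 2

    # Per-token saliency: alignment of the estimated sensitivity with the
    # token's own embedding direction (one scalar per input word).
    return np.sum(grad_estimate * embeddings, axis=1)


# Toy usage with a hypothetical stand-in scorer, just to check shapes.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    toy_embeddings = rng.normal(size=(7, 16))           # 7 tokens, 16-dim embeddings
    w = rng.normal(size=16)
    toy_model = lambda e: float(np.tanh(e @ w).sum())   # pretend class score
    print(mase_saliency(toy_model, toy_embeddings))
```

Because the sketch only queries `predict_fn` on perturbed embeddings, it needs no access to the model's internal architecture, which is the model-agnostic property the abstract emphasizes.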
Similar Papers
SAGE: Saliency-Guided Contrastive Embeddings
CV and Pattern Recognition
Teaches computers to see what humans see.
Now you see me! A framework for obtaining class-relevant saliency maps
CV and Pattern Recognition
Shows *why* a computer made the choice it did.
SALMAN: Stability Analysis of Language Models Through the Maps Between Graph-based Manifolds
Machine Learning (CS)
Makes AI language models more trustworthy.