When LRP Diverges from Leave-One-Out in Transformers
By: Weiqiu You, Siqi Zeng, Yao-Hung Hubert Tsai, and more
Potential Business Impact:
Shows when explanations of AI decisions can be trusted.
Leave-One-Out (LOO) provides an intuitive measure of feature importance but is computationally prohibitive. While Layer-Wise Relevance Propagation (LRP) offers a potentially efficient alternative, its axiomatic soundness in modern Transformers remains largely under-examined. In this work, we first show that the bilinear propagation rules used in recent advances in AttnLRP violate the implementation invariance axiom. We prove this analytically and confirm it empirically in linear attention layers. Second, we revisit CP-LRP as a diagnostic baseline and find that bypassing relevance propagation through the softmax layer -- backpropagating relevance only through the value matrices -- significantly improves alignment with LOO, particularly in middle-to-late Transformer layers. Overall, our results suggest that (i) sensitivity to bilinear factorization and (ii) softmax propagation error may jointly undermine LRP's ability to approximate LOO in Transformers.
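To make the LOO reference measure concrete (and to show why it is computationally prohibitive: it needs one extra forward pass per input token), here is a minimal sketch, not taken from the paper. The function names, the toy stand-in model, the choice of masking token, and the target logit are all illustrative assumptions; in practice the same loop would be run over a real Transformer, and its scores compared against LRP attributions.

```python
import torch


def loo_importance(model, input_ids, mask_id, target):
    """Leave-one-out importance: replace each token with `mask_id` in turn,
    re-run the model, and record the drop in the target logit.
    One extra forward pass per token is what makes LOO expensive."""
    model.eval()
    with torch.no_grad():
        base = model(input_ids)[0, target]           # unperturbed target logit
        scores = torch.zeros(input_ids.shape[1])
        for i in range(input_ids.shape[1]):
            perturbed = input_ids.clone()
            perturbed[0, i] = mask_id                # "leave out" token i
            scores[i] = base - model(perturbed)[0, target]
    return scores                                     # positive = removing the token hurts


# Toy usage with a hypothetical stand-in model (embedding + mean-pool classifier),
# only so the sketch runs end to end without external weights.
class ToyClassifier(torch.nn.Module):
    def __init__(self, vocab=100, dim=16, classes=2):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.head = torch.nn.Linear(dim, classes)

    def forward(self, ids):
        return self.head(self.emb(ids).mean(dim=1))   # (batch, classes)


torch.manual_seed(0)
model = ToyClassifier()
input_ids = torch.randint(1, 100, (1, 8))             # batch of one, 8 tokens
print(loo_importance(model, input_ids, mask_id=0, target=1))
```

An LRP-style attribution over the same tokens could then be compared against these scores (for example with a rank correlation), which is the kind of LOO alignment the abstract refers to.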
Similar Papers
Always Keep Your Promises: DynamicLRP, A Model-Agnostic Solution To Layer-Wise Relevance Propagation
Machine Learning (CS)
Explains AI decisions for any computer program.
Revisiting LRP: Positional Attribution as the Missing Ingredient for Transformer Explainability
Machine Learning (CS)
Shows why AI makes certain decisions.
Attribution-guided Pruning for Compression, Circuit Discovery, and Targeted Correction in LLMs
Machine Learning (CS)
Makes AI smarter and smaller, removing bad parts.