Grad-ELLM: Gradient-based Explanations for Decoder-only LLMs
By: Xin Huang, Antoni B. Chan
Potential Business Impact:
Shows which words help AI give answers.
Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse tasks, yet their black-box nature raises concerns about transparency and faithfulness. Input attribution methods aim to highlight each input token's contribution to the model's output, but existing approaches are typically model-agnostic and do not exploit transformer-specific architecture, leading to limited faithfulness. To address this, we propose Grad-ELLM, a gradient-based attribution method for decoder-only transformer-based LLMs. By aggregating channel importance, derived from gradients of the output logit with respect to attention-layer outputs, with spatial importance, derived from attention maps, Grad-ELLM generates a heatmap at each generation step without requiring architectural modifications. Additionally, we introduce two faithfulness metrics, $\pi$-Soft-NC and $\pi$-Soft-NS, which modify Soft-NC/NS to provide fairer comparisons by controlling the amount of information kept when perturbing the text. We evaluate Grad-ELLM on sentiment classification, question answering, and open-ended generation tasks using different models. Experimental results show that Grad-ELLM consistently achieves higher faithfulness than other attribution methods.
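To make the mechanism concrete, below is a minimal PyTorch sketch of one attribution step over GPT-2, in the spirit of the description above. The hook placement, head-averaging, per-layer summation, and ReLU/normalization choices are illustrative assumptions, not the paper's exact formulation.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

inputs = tok("The movie was surprisingly good", return_tensors="pt")
input_ids = inputs["input_ids"]

# Capture each attention block's output so gradients can be taken w.r.t. it.
attn_outputs = []
def hook(module, args, output):
    out = output[0]          # (batch, seq, channels) attention-layer output
    out.retain_grad()
    attn_outputs.append(out)
    return output

handles = [blk.attn.register_forward_hook(hook) for blk in model.transformer.h]

out = model(input_ids, output_attentions=True)
logits = out.logits[0, -1]   # next-token logits at the current generation step
target = logits.argmax()     # explain the greedy next token
logits[target].backward()    # gradients of the chosen output logit

for h in handles:
    h.remove()

# Combine channel importance (from gradients) with spatial importance
# (from attention maps), accumulated over layers -- one plausible
# instantiation of the aggregation sketched in the abstract.
heatmap = torch.zeros(input_ids.shape[1])
for act, attn in zip(attn_outputs, out.attentions):
    grad = act.grad[0]                         # (seq, channels)
    channel_w = grad.mean(dim=0)               # channel importance (channels,)
    chan_score = (act[0] * channel_w).sum(-1)  # channel-weighted activation (seq,)
    spatial = attn[0].mean(dim=0)[-1]          # head-averaged attention from last position (seq,)
    heatmap += torch.relu(chan_score * spatial)

heatmap = heatmap / (heatmap.max() + 1e-8)
for t, s in zip(tok.convert_ids_to_tokens(input_ids[0]), heatmap.tolist()):
    print(f"{t:>12s}  {s:.3f}")

Running this prints a normalized per-token relevance score for the greedily chosen next token; in a full generation loop the same pass would be repeated at every step.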
Similar Papers
Towards Efficient LLM-aware Heterogeneous Graph Learning
Computation and Language
Makes AI understand complex connections faster.
Last Layer Logits to Logic: Empowering LLMs with Logic-Consistent Structured Knowledge Reasoning
Computation and Language
Fixes computer "thinking" to be more logical.
RelayLLM: Efficient Reasoning via Collaborative Decoding
Computation and Language
Smart AI asks for help only when it's stuck.