Compact Example-Based Explanations for Language Models
By: Loris Schoenegger, Benjamin Roth
Potential Business Impact:
Shows which training examples best explain a model's answers.
Training data influence estimation methods quantify the contribution of training documents to a model's output, making them a promising source of information for example-based explanations. As humans cannot interpret thousands of documents, only a small subset of the training data can be presented as an explanation. Although the choice of which documents to include directly affects explanation quality, previous evaluations of such systems have largely ignored the question of how to select them. To address this, we propose a novel selection relevance score, a retraining-free metric that quantifies how useful a set of examples is for explaining a model's output. We validate this score through fine-tuning experiments, confirming that it can predict whether a set of examples supports or undermines the model's predictions. Using this metric, we further show that common selection strategies often underperform random selection. Motivated by this finding, we propose a strategy that balances influence and representativeness, enabling better use of selection budgets than naively selecting the highest-ranking examples.
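The abstract does not spell out the paper's scoring or selection procedure. As a rough illustration of what a strategy balancing influence and representativeness could look like, the sketch below greedily picks examples by combining a per-example influence score with a facility-location style coverage gain over document embeddings. The function `select_examples`, the `alpha` trade-off weight, and the synthetic influence scores and embeddings are all assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only (not the paper's algorithm): greedy selection that
# trades off per-example influence against representativeness of the
# training set, similar in spirit to maximal marginal relevance.
import numpy as np


def select_examples(influence, embeddings, budget, alpha=0.5):
    """Greedily pick `budget` training examples as an explanation.

    influence:  (n,) array, higher = stronger influence on the model output
                (placeholder values here; in practice these come from an
                influence estimation method).
    embeddings: (n, d) array of training-document representations.
    alpha:      trade-off between influence (alpha) and coverage (1 - alpha);
                in practice both terms may need rescaling to comparable ranges.
    """
    n = influence.shape[0]
    # Cosine similarity between all training examples.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T

    selected = []
    candidates = set(range(n))
    while len(selected) < budget and candidates:
        best, best_score = None, -np.inf
        for i in candidates:
            if selected:
                # Marginal coverage gain: how much better example i represents
                # documents not yet covered by the current selection.
                current_cover = np.max(sim[selected], axis=0)
                coverage_gain = np.mean(
                    np.maximum(sim[i], current_cover) - current_cover
                )
            else:
                coverage_gain = np.mean(sim[i])
            score = alpha * influence[i] + (1 - alpha) * coverage_gain
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    influence = rng.normal(size=200)          # synthetic influence estimates
    embeddings = rng.normal(size=(200, 32))   # synthetic document embeddings
    print(select_examples(influence, embeddings, budget=10))
```

Under this kind of scheme, setting alpha close to 1 recovers naive top-k selection by influence, which the abstract reports can underperform random selection; lower values of alpha push the chosen set toward covering the training distribution more broadly.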
Similar Papers
Machine Learning from Explanations
Machine Learning (CS)
Teaches computers to learn with fewer examples.
Influence-driven Curriculum Learning for Pre-training on Limited Data
Computation and Language
Teaches computers to learn faster by sorting lessons.
Improving Influence-based Instruction Tuning Data Selection for Balanced Learning of Diverse Capabilities
Computation and Language
Helps AI learn many different things well.