Faithful and Fast Influence Function via Advanced Sampling
By: Jungyeon Koh, Hyeonsu Lyu, Jonggyu Jang, and more
Potential Business Impact:
Helps understand why computer models make certain choices.
How can we explain the influence of training data on black-box models? Influence functions (IFs) offer a post-hoc solution by utilizing gradients and Hessians. However, computing the Hessian for an entire dataset is resource-intensive, necessitating a feasible alternative. A common approach involves randomly sampling a small subset of the training data, but this method often results in highly inconsistent IF estimates due to the high variance in sample configurations. To address this, we propose two advanced sampling techniques based on features and logits. These samplers select a small yet representative subset of the entire dataset by considering the stochastic distribution of features or logits, thereby enhancing the accuracy of IF estimation. We validate our approach through class removal experiments, a typical application of IFs, using the F1-score to measure how effectively the model forgets the removed class while maintaining inference consistency on the remaining classes. Compared to the baseline, our method either reduces computation time by 30.1% and memory usage by 42.2%, or improves the F1-score by 2.5%.
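For context, the standard influence function (following Koh & Liang) scores a training example z against a test example z_test as IF(z_test, z) = -∇L(z_test)^T H^{-1} ∇L(z), where H is the Hessian of the training loss; subsampling replaces H with a Hessian estimated on a small subset. The sketch below is a minimal NumPy illustration of that idea on a toy logistic-regression model: it is not the authors' implementation, and the quantile-binned `logit_stratified_sample` and the damping constant are illustrative assumptions standing in for the paper's feature- and logit-based samplers.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy setup: logistic regression p = sigmoid(X @ w) on synthetic data ---
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit w with a few damped Newton steps (adequate for this toy problem).
w = np.zeros(d)
for _ in range(20):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n
    hess = (X * (p * (1 - p))[:, None]).T @ X / n + 1e-3 * np.eye(d)
    w -= np.linalg.solve(hess, grad)

def grad_loss(x, label):
    """Per-example gradient of the log loss at the fitted w."""
    return (sigmoid(x @ w) - label) * x

def logit_stratified_sample(m, bins=10):
    """Hypothetical logit-aware sampler (illustrative, not the paper's):
    bin examples by the model's logit and draw from every bin, so the
    subset mirrors the logit distribution instead of a uniform draw."""
    logits = X @ w
    edges = np.quantile(logits, np.linspace(0, 1, bins + 1))
    which = np.digitize(logits, edges[1:-1])   # bin index in 0..bins-1
    idx, per_bin = [], max(1, m // bins)
    for b in range(bins):
        members = np.where(which == b)[0]
        take = min(per_bin, len(members))
        if take:
            idx.extend(rng.choice(members, size=take, replace=False))
    return np.array(idx)

def influence(x_test, y_test, x_train, y_train, subset):
    """IF(z_test, z_train) = -grad(z_test)^T H^{-1} grad(z_train),
    with H estimated only on `subset` (the sampled Hessian)."""
    Xs = X[subset]
    ps = sigmoid(Xs @ w)
    H = (Xs * (ps * (1 - ps))[:, None]).T @ Xs / len(subset) + 1e-3 * np.eye(d)
    ihvp = np.linalg.solve(H, grad_loss(x_train, y_train))  # inverse-Hessian-vector product
    return -grad_loss(x_test, y_test) @ ihvp

subset = logit_stratified_sample(m=200)
print("IF of train example 1 on test example 0:",
      influence(X[0], y[0], X[1], y[1], subset))
```

The uniform-sampling baseline of the same size would be `rng.choice(n, size=200, replace=False)`; the stratified draw keeps every region of the logit distribution represented in the Hessian estimate, which is the consistency property the paper's samplers aim for.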
Similar Papers
IFFair: Influence Function-driven Sample Reweighting for Fair Classification
Machine Learning (CS)
Fixes unfair computer decisions by changing how data is used.
Rescaled Influence Functions: Accurate Data Attribution in High Dimension
Machine Learning (CS)
Finds bad training data that tricks computers.