Supervised learning pays attention
By: Erin Craig, Robert Tibshirani
In-context learning with attention enables large neural networks to make context-specific predictions by selectively focusing on relevant examples. Here, we adapt this idea to supervised learning procedures for tabular data, such as lasso regression and gradient boosting. Our goals are to (1) flexibly fit personalized models for each prediction point and (2) retain model simplicity and interpretability. Our method fits a local model for each test observation by weighting the training data according to attention, a supervised similarity measure that emphasizes features and interactions that are predictive of the outcome. Attention weighting allows the method to adapt to heterogeneous data in a data-driven way, without requiring pre-specified clusters or similarity measures. Further, our approach is uniquely interpretable: for each test observation, we identify which features are most predictive and which training observations are most relevant. We then show how to use attention weighting for time series and spatial data, and we present a method for adapting pretrained tree-based models to distributional shift using attention-weighted residual corrections. Across real and simulated datasets, attention weighting improves predictive performance while preserving interpretability, and our theory shows that attention-weighted linear models attain lower mean squared error than the standard linear model under mixture-of-models data-generating processes with known subgroup structure.
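To make the core recipe concrete, below is a minimal sketch of attention-weighted local fitting on toy data. It is an illustration under stated assumptions, not the paper's exact construction: the softmax attention kernel, the use of gradient boosting feature importances as the supervised relevance measure, and the temperature `tau` and lasso penalty `alpha` are all hypothetical stand-ins.

```python
# Minimal sketch: fit a local lasso for one test point by attention-weighting
# the training data. The attention kernel here (softmax over feature-relevance-
# weighted distances) is an illustrative assumption, not the paper's method.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
# Toy heterogeneous outcome: the sign of feature 1 flips the effect of feature 0.
y_train = (
    X_train[:, 0] * (X_train[:, 1] > 0)
    - X_train[:, 0] * (X_train[:, 1] <= 0)
    + 0.1 * rng.normal(size=200)
)
x_test = rng.normal(size=(1, 5))

# Supervised relevance: weight each feature by how predictive it is of the
# outcome (here via gradient boosting importances, a hypothetical choice).
gb = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
relevance = gb.feature_importances_

# Attention weights: softmax over negative relevance-weighted squared
# distances, so nearby points on predictive features get more weight.
sq_dist = ((X_train - x_test) ** 2 * relevance).sum(axis=1)
tau = 1.0  # temperature; hypothetical tuning parameter
w = np.exp(-sq_dist / tau)
w /= w.sum()

# Fit a simple, interpretable local model for this test point using the
# attention weights as sample weights.
local_model = Lasso(alpha=0.01).fit(X_train, y_train, sample_weight=w)
prediction = local_model.predict(x_test)
```

The inspectable pieces mirror the interpretability claim in the abstract: `w` identifies which training observations are most relevant to this test point, and the fitted local coefficients identify which features drive its prediction.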