Sparse Autoencoders for Sequential Recommendation Models: Interpretation and Flexible Control
By: Anton Klenitskiy, Konstantin Polev, Daria Denisova, and more
Potential Business Impact:
Explains why computers suggest what they do.
Many current state-of-the-art models for sequential recommendation are based on transformer architectures. Interpreting and explaining such black-box models is an important research question, as a better understanding of their internals makes it possible to influence and control their behavior, which matters in a variety of real-world applications. Recently, sparse autoencoders (SAEs) have been shown to be a promising unsupervised approach for extracting interpretable features from language models. These autoencoders learn to reconstruct the hidden states of a transformer's internal layers from sparse linear combinations of directions in their activation space. This paper focuses on applying SAEs to the sequential recommendation domain. We show that the approach transfers successfully to a transformer trained on a sequential recommendation task: the learned directions turn out to be more interpretable and monosemantic than the original hidden state dimensions. Moreover, we demonstrate that the features learned by the SAE can be used to effectively and flexibly control the model's behavior, giving end-users a straightforward way to adjust their recommendations to different custom scenarios and contexts.
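To make the idea concrete, below is a minimal sketch of a sparse autoencoder over transformer hidden states, assuming a standard PyTorch setup. The dimensions, the L1 sparsity coefficient, and the steering step at the end are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal SAE sketch: reconstruct transformer hidden states from sparse
# linear combinations of learned directions (assumed PyTorch setting).
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, hidden_dim: int, dict_size: int):
        super().__init__()
        # Encoder maps a hidden state to a much larger dictionary of features.
        self.encoder = nn.Linear(hidden_dim, dict_size)
        # Decoder reconstructs the hidden state as a sparse combination of
        # learned directions (the columns of the decoder weight matrix).
        self.decoder = nn.Linear(dict_size, hidden_dim)

    def forward(self, h: torch.Tensor):
        f = torch.relu(self.encoder(h))  # sparse feature activations
        h_hat = self.decoder(f)          # reconstructed hidden state
        return h_hat, f


def sae_loss(h, h_hat, f, l1_coef=1e-3):
    # Reconstruction error plus an L1 penalty that encourages sparsity.
    recon = ((h - h_hat) ** 2).sum(dim=-1).mean()
    sparsity = f.abs().sum(dim=-1).mean()
    return recon + l1_coef * sparsity


# Usage: collect hidden states from a trained sequential-recommendation
# transformer (placeholder random tensors here) and fit the SAE on them.
hidden_dim, dict_size = 256, 4096
sae = SparseAutoencoder(hidden_dim, dict_size)
h = torch.randn(32, hidden_dim)          # stand-in for real hidden states
h_hat, f = sae(h)
loss = sae_loss(h, h_hat, f)

# Flexible control (steering), sketched: scale one learned feature's decoder
# direction and add it to the hidden state before the recommendation head.
feature_idx, strength = 7, 4.0           # hypothetical feature index and scale
direction = sae.decoder.weight[:, feature_idx]   # shape: (hidden_dim,)
h_steered = h + strength * direction
```

The steering step reflects the paper's high-level claim that learned features can be used to adjust recommendations; the specific feature index and strength would be chosen per use case.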
Similar Papers
Sparse Autoencoders Can Interpret Randomly Initialized Transformers
Machine Learning (CS)
Makes AI brains understandable, even random ones.
Probing the Representational Power of Sparse Autoencoders in Vision Models
CV and Pattern Recognition
Makes AI understand pictures better and create new ones.