Score: 2

Sparse Autoencoders for Sequential Recommendation Models: Interpretation and Flexible Control

Published: July 16, 2025 | arXiv ID: 2507.12202v1

By: Anton Klenitskiy, Konstantin Polev, Daria Denisova, and more

Potential Business Impact:

Explains why recommender systems suggest what they do, and lets users adjust those suggestions.

Business Areas:
Semantic Search, Internet Services

Many current state-of-the-art models for sequential recommendation are based on transformer architectures. Interpreting and explaining such black-box models is an important research question, as a better understanding of their internals helps to understand, influence, and control their behavior, which matters in a variety of real-world applications. Recently, sparse autoencoders (SAEs) have been shown to be a promising unsupervised approach for extracting interpretable features from language models. These autoencoders learn to reconstruct the hidden states of the transformer's internal layers from sparse linear combinations of directions in their activation space. This paper focuses on applying SAEs to the sequential recommendation domain. We show that this approach can be successfully applied to a transformer trained on a sequential recommendation task: the learned directions turn out to be more interpretable and monosemantic than the original hidden-state dimensions. Moreover, we demonstrate that the features learned by the SAE can be used to effectively and flexibly control the model's behavior, giving end users a straightforward way to adjust their recommendations to different custom scenarios and contexts.
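The mechanism the abstract describes (encode a hidden state into sparse feature activations, reconstruct it from decoder directions, and steer behavior by adding a chosen direction back) can be sketched as below. This is a minimal NumPy illustration with random weights, not the paper's actual implementation: all names, dimensions, and the steering coefficient are assumptions, and a real SAE would be trained with a reconstruction-plus-sparsity objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 64, 256  # transformer hidden size; overcomplete SAE width

# Hypothetical "trained" SAE parameters (random here, for illustration only).
W_enc = rng.normal(0.0, 0.1, (d_sae, d_model))
b_enc = -0.5 * np.ones(d_sae)  # a negative encoder bias pushes most features to zero
W_dec = rng.normal(0.0, 0.1, (d_model, d_sae))

def encode(h):
    """Sparse feature activations: ReLU(W_enc @ h + b_enc)."""
    return np.maximum(W_enc @ h + b_enc, 0.0)

def decode(f):
    """Reconstruct the hidden state as a linear combination of decoder directions."""
    return W_dec @ f

def steer(h, feature_idx, alpha):
    """Control: add alpha units of one learned direction to the hidden state."""
    return h + alpha * W_dec[:, feature_idx]

h = rng.normal(size=d_model)       # stand-in for a transformer hidden state
f = encode(h)                      # sparse, non-negative feature vector
h_hat = decode(f)                  # reconstruction from active directions
h_steered = steer(h, feature_idx=3, alpha=5.0)  # amplify one (hypothetical) feature
```

In the recommendation setting described by the paper, `steer` is the interesting operation: nudging the hidden state along an interpretable direction shifts the downstream item scores toward the concept that direction encodes.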

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Information Retrieval