Efficient Sequential Recommendation for Long Term User Interest Via Personalization
By: Qiang Zhang, Hanchao Yu, Ivan Ji, and others
Potential Business Impact:
Makes movie suggestions faster and better.
Recent years have witnessed the success of sequential modeling, generative recommenders, and large language models for recommendation. Although the scaling law has been validated for sequential models, they are computationally inefficient in real-world applications such as recommendation, because the transformer's cost grows quadratically with sequence length. To improve the efficiency of sequential models, we introduce a novel approach to sequential recommendation that leverages personalization techniques to enhance both efficiency and performance. Our method compresses long user interaction histories into learnable tokens, which are then combined with recent interactions to generate recommendations. This significantly reduces computational cost while maintaining high recommendation accuracy. The method can be applied to existing transformer-based recommendation models, e.g., HSTU and HLLM. Extensive experiments on multiple sequential models demonstrate its versatility and effectiveness. Source code is available at \href{https://github.com/facebookresearch/PerSRec}{https://github.com/facebookresearch/PerSRec}.
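The compression idea described in the abstract can be sketched in a few lines: a small set of learnable query tokens cross-attends over the full history, producing a fixed-size summary that is concatenated with recent interactions before the transformer runs. The following is a minimal illustrative sketch (not the paper's actual implementation); the function names, token count `k`, and use of plain dot-product cross-attention are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress_history(history, query_tokens):
    """Cross-attention: k learnable query tokens attend over the full
    history of T item embeddings, yielding a fixed-size (k, d) summary."""
    d = history.shape[-1]
    scores = query_tokens @ history.T / np.sqrt(d)   # (k, T)
    return softmax(scores, axis=-1) @ history        # (k, d)

rng = np.random.default_rng(0)
d, T, k, r = 16, 5000, 8, 32
history = rng.normal(size=(T, d))   # long user interaction history
recent = rng.normal(size=(r, d))    # recent interactions
queries = rng.normal(size=(k, d))   # learnable compression tokens

summary = compress_history(history, queries)
model_input = np.concatenate([summary, recent], axis=0)  # (k + r, d)
# Downstream self-attention now costs O((k + r)^2) per layer
# instead of O((T + r)^2), since k + r << T + r.
```

The payoff is in the input length: the transformer sees `k + r = 40` tokens here rather than `T + r = 5032`, which is where the quadratic savings come from.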
Similar Papers
Revisiting Self-Attentive Sequential Recommendation
Information Retrieval
Helps websites show you things you'll like.
Massive Memorization with Hundreds of Trillions of Parameters for Sequential Transducer Generative Recommenders
Information Retrieval
Makes online suggestions faster with long histories.
Leveraging Historical and Current Interests for Continual Sequential Recommendation
Information Retrieval
Keeps online shopping suggestions smart over time.