Closing the Performance Gap in Generative Recommenders with Collaborative Tokenization and Efficient Modeling
By: Simon Lepage, Jeremie Mary, David Picard
Potential Business Impact:
Makes movie suggestions better by understanding what you like.
Recent work has explored generative recommender systems as an alternative to traditional ID-based models, reframing item recommendation as a sequence generation task over discrete item tokens. While promising, such methods often underperform in practice compared to well-tuned ID-based baselines like SASRec. In this paper, we identify two key limitations holding back generative approaches: the lack of collaborative signal in item tokenization, and inefficiencies in the commonly used encoder-decoder architecture. To address these issues, we introduce COSETTE, a contrastive tokenization method that integrates collaborative information directly into the learned item representations, jointly optimizing for both content reconstruction and recommendation relevance. Additionally, we propose MARIUS, a lightweight, audio-inspired generative model that decouples timeline modeling from item decoding. MARIUS reduces inference cost while improving recommendation accuracy. Experiments on standard sequential recommendation benchmarks show that our approach narrows, or even eliminates, the performance gap between generative and modern ID-based models, while retaining the benefits of the generative paradigm.
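To make the joint tokenization objective concrete, here is a minimal NumPy sketch of the general idea, not the paper's actual COSETTE implementation: item content embeddings are quantized against a codebook (reconstruction term), while an InfoNCE contrastive term pulls the quantized codes of co-interacted item pairs together, injecting collaborative signal. All function names, the `alpha` weight, and the hard nearest-neighbor quantizer are illustrative assumptions.

```python
import numpy as np

def quantize(x, codebook):
    # Hard VQ step: map each item embedding to its nearest codebook entry.
    dists = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = dists.argmin(axis=1)
    return codebook[idx], idx

def info_nce(anchors, positives, temperature=0.1):
    # InfoNCE: pull each anchor toward its own positive (diagonal),
    # push it away from all other items in the batch (off-diagonal).
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def tokenization_loss(content, co_interacted, codebook, alpha=0.5):
    # Joint objective (assumed form): content reconstruction plus a
    # collaborative contrastive term over co-interacted item pairs.
    quantized, _ = quantize(content, codebook)
    recon = ((content - quantized) ** 2).mean()
    contrast = info_nce(quantized, co_interacted)
    return recon + alpha * contrast

# Toy example: 8 items with 4-d content embeddings, 16 codebook entries;
# "co_interacted" stands in for embeddings of items consumed together.
rng = np.random.default_rng(0)
content = rng.normal(size=(8, 4))
codebook = rng.normal(size=(16, 4))
co_interacted = content + 0.1 * rng.normal(size=(8, 4))
loss = tokenization_loss(content, co_interacted, codebook)
```

In this toy form, lowering `alpha` recovers a purely content-driven tokenizer, while raising it biases the codes toward collaborative structure, which is the trade-off the abstract attributes to COSETTE.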
Similar Papers
CoFiRec: Coarse-to-Fine Tokenization for Generative Recommendation
Information Retrieval
Helps online shoppers find exactly what they want.
Universal Item Tokenization for Transferable Generative Recommendation
Information Retrieval
Recommends items by understanding their pictures and words.
Learning Decomposed Contextual Token Representations from Pretrained and Collaborative Signals for Generative Recommendation
Information Retrieval
Makes online suggestions smarter by understanding user choices.