EncodeRec: An Embedding Backbone for Recommendation Systems
By: Guy Hadad, Neomi Rabaev, Bracha Shapira
Potential Business Impact:
Improves the relevance of online shopping recommendations.
Recent recommender systems increasingly leverage embeddings from large pre-trained language models (PLMs). However, such embeddings exhibit two key limitations: (1) PLMs are not explicitly optimized to produce structured and discriminative embedding spaces, and (2) their representations remain overly generic, often failing to capture the domain-specific semantics crucial for recommendation tasks. We present EncodeRec, an approach designed to align textual representations with recommendation objectives while learning compact, informative embeddings directly from item descriptions. EncodeRec keeps the language model parameters frozen during recommender system training, making it computationally efficient without sacrificing semantic fidelity. Experiments across core recommendation benchmarks demonstrate its effectiveness both as a backbone for sequential recommendation models and for semantic ID tokenization, showing substantial gains over PLM-based and embedding model baselines. These results underscore the pivotal role of embedding adaptation in bridging the gap between general-purpose language models and practical recommender systems.
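The core design the abstract describes — a frozen language-model backbone whose text embeddings are adapted into compact, recommendation-aligned vectors — can be illustrated with a toy sketch. This is not EncodeRec's actual implementation; the embeddings, adapter, and training objective below are stand-ins chosen for illustration (here, a linear adapter trained to pull co-occurring items together while the "PLM" vectors stay fixed).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen PLM outputs: one precomputed text embedding per item.
# As in the paper's setup, the language model is frozen, so these vectors are
# computed once and never updated during recommender training.
n_items, plm_dim, rec_dim = 50, 32, 8
plm_emb = rng.normal(size=(n_items, plm_dim))  # frozen backbone embeddings

# Trainable adapter: projects generic PLM vectors into a compact rec space.
W = rng.normal(scale=0.1, size=(plm_dim, rec_dim))

def item_vec(i):
    """Compact recommendation embedding for item i."""
    return plm_emb[i] @ W

def train_step(i, j, lr=0.005):
    """One gradient step on 0.5 * ||z_i - z_j||^2, a toy co-occurrence loss.

    Only W is updated; plm_emb never changes (the backbone stays frozen).
    """
    global W
    diff = plm_emb[i] @ W - plm_emb[j] @ W
    grad = np.outer(plm_emb[i], diff) - np.outer(plm_emb[j], diff)
    W -= lr * grad

before = np.linalg.norm(item_vec(0) - item_vec(1))
for _ in range(100):
    train_step(0, 1)
after = np.linalg.norm(item_vec(0) - item_vec(1))
print(after < before)  # the adapter draws co-occurring items together
```

Because the backbone is frozen, the expensive PLM forward pass can be precomputed once per item catalog, and only the small adapter participates in recommender training — the source of the computational efficiency the abstract claims.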
Similar Papers
DenseRec: Revisiting Dense Content Embeddings for Sequential Transformer-based Recommendation
Information Retrieval
Helps online stores suggest new items better.
Enhancing Recommender Systems Using Textual Embeddings from Pre-trained Language Models
Information Retrieval
Makes movie suggestions understand what you like.
A Plug-and-play Model-agnostic Embedding Enhancement Approach for Explainable Recommendation
Information Retrieval
Makes movie suggestions more personal and understandable.