Enhancing Recommender Systems Using Textual Embeddings from Pre-trained Language Models
By: Ngoc Luyen Le, Marie-Hélène Abel
Potential Business Impact:
Makes movie suggestions that understand what you like.
Recent advancements in pre-trained language models (PLMs) such as BERT and RoBERTa have revolutionized natural language processing, enabling a deeper understanding of human-like language. In this paper, we explore enhancing recommender systems with textual embeddings from PLMs to address the limitations of traditional recommender systems, which rely solely on explicit features from users, items, and user-item interactions. By transforming structured data into natural-language representations, we generate high-dimensional embeddings that capture deeper semantic relationships among users, items, and contexts. Our experiments demonstrate that this approach significantly improves recommendation accuracy and relevance, yielding more personalized and context-aware recommendations. The findings underscore the potential of PLMs to enhance the effectiveness of recommender systems.
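To make the idea concrete, here is a minimal sketch of the pipeline the abstract describes: serialize a structured user or item record into a natural-language sentence, embed it with a pre-trained encoder, and score user-item pairs by embedding similarity. The field names, text template, choice of bert-base-uncased, and mean-pooling are illustrative assumptions, not the paper's exact serialization or scoring method.

```python
# Sketch: structured data -> natural-language text -> PLM embedding -> similarity.
# Assumes Hugging Face transformers and PyTorch; the record fields and the
# "key: value" template below are hypothetical examples, not the paper's format.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def to_text(record: dict) -> str:
    """Serialize a structured record into a natural-language representation."""
    return ", ".join(f"{key}: {value}" for key, value in record.items())

def embed(text: str) -> torch.Tensor:
    """Mean-pool the encoder's last hidden states into one dense embedding.
    (Mean-pooling is one common choice; the paper may pool differently.)"""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)    # ignore padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Hypothetical user and item records.
user = {"age": 34, "favorite genres": "sci-fi, thriller", "last watched": "Dune"}
item = {"title": "Blade Runner 2049", "genre": "sci-fi", "year": 2017}

# Rank candidate items for a user by cosine similarity of their embeddings.
score = torch.cosine_similarity(embed(to_text(user)), embed(to_text(item)))
print(f"user-item relevance: {score.item():.3f}")
```

In practice, item embeddings would be precomputed and the similarity used either directly for retrieval or as an extra feature in a downstream ranking model; the snippet only shows the embedding step the abstract highlights.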
Similar Papers
Architecture is All You Need: Improving LLM Recommenders by Dropping the Text
Information Retrieval
Makes movie suggestions better with less computer power.
Knowledge-augmented Pre-trained Language Models for Biomedical Relation Extraction
Computation and Language
Helps computers find connections in science papers.
End-to-End Personalization: Unifying Recommender Systems with Large Language Models
Information Retrieval
Suggests movies you'll love, explains why.