Language Models as Semantic Augmenters for Sequential Recommenders
By: Mahsa Valizadeh, Xiangjue Dong, Rui Tuo, and more
Potential Business Impact:
Makes recommendation predictions more accurate by adding meaning to user activity data.
Large Language Models (LLMs) excel at capturing latent semantics and contextual relationships across diverse modalities. However, when modeling user behavior from sequential interaction data, performance often suffers when such semantic context is limited or absent. We introduce LaMAR, an LLM-driven semantic enrichment framework that automatically augments such sequences. LaMAR leverages LLMs in a few-shot setting to generate auxiliary contextual signals, inferring latent semantic aspects of a user's intent and item relationships from existing metadata. These generated signals, such as inferred usage scenarios, item intents, or thematic summaries, give the original sequences greater contextual depth. We demonstrate the utility of this generated resource by integrating it into benchmark sequential modeling tasks, where it consistently improves performance. Further analysis shows that the LLM-generated signals exhibit high semantic novelty and diversity, enhancing the representational capacity of the downstream models. This work represents a data-centric paradigm in which LLMs serve as intelligent context generators, contributing a new method for the semi-automatic creation of training data and language resources.
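The pipeline the abstract describes (few-shot prompting over item metadata, then attaching the generated signals to each interaction) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: `call_llm`, the few-shot examples, and the field names are hypothetical placeholders.

```python
# Minimal sketch of LLM-based semantic augmentation for a sequential
# recommender. `call_llm` is a hypothetical stand-in for any LLM client;
# the few-shot examples and prompt wording are illustrative, not the
# paper's actual prompts.

FEW_SHOT = """\
Item: "Stainless Steel French Press, 34 oz"
Inferred usage scenario: brewing coffee at home for several people

Item: "Trail Running Shoes, waterproof"
Inferred usage scenario: off-road running in wet conditions
"""


def call_llm(prompt: str) -> str:
    # Stub for illustration; replace with a real LLM API call.
    return "brewing coffee at home"


def infer_usage_scenario(item_title: str) -> str:
    """Ask the LLM, in a few-shot setting, to infer a latent usage
    scenario for one item from its metadata (here, just the title)."""
    prompt = (
        "Infer a short usage scenario for each item.\n\n"
        f"{FEW_SHOT}\n"
        f'Item: "{item_title}"\n'
        "Inferred usage scenario:"
    )
    return call_llm(prompt).strip()


def augment_sequence(interaction_seq: list[dict]) -> list[dict]:
    """Attach a generated semantic signal to every interaction so the
    downstream sequential model sees item ID plus auxiliary context."""
    for event in interaction_seq:
        event["semantic_signal"] = infer_usage_scenario(event["title"])
    return interaction_seq


# Example: a user's raw sequence of (item_id, title) interactions.
seq = [
    {"item_id": 101, "title": "Stainless Steel French Press, 34 oz"},
    {"item_id": 207, "title": "Burr Coffee Grinder, conical"},
]
augmented = augment_sequence(seq)  # each event now carries extra context
```

One plausible integration, consistent with the abstract but not specified by it, is to embed the generated text with a text encoder and fuse it with the item embeddings consumed by the downstream sequential model.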
Similar Papers
Using LLMs to Capture Users' Temporal Context for Recommendation
Information Retrieval
Helps apps learn what you like, now and later.
LLMSeR: Enhancing Sequential Recommendation via LLM-based Data Augmentation
Information Retrieval
Helps online stores suggest better items for everyone.
Generation and annotation of item usage scenarios in e-commerce using large language models
Information Retrieval
Helps online stores suggest items that go together.