Heterogeneous User Modeling for LLM-based Recommendation
By: Honghui Bao, Wenjie Wang, Xinyu Lin, and more
Potential Business Impact:
Helps computers suggest things you'll like better.
Leveraging Large Language Models (LLMs) for recommendation has demonstrated notable success in various domains, showcasing their potential for open-domain recommendation. A key challenge to advancing open-domain recommendation lies in effectively modeling user preferences from users' heterogeneous behaviors across multiple domains. Existing approaches, including ID-based and semantic-based modeling, struggle with poor generalization, an inability to compress noisy interactions effectively, and the domain seesaw phenomenon. To address these challenges, we propose a Heterogeneous User Modeling (HUM) method, which incorporates a compression enhancer and a robustness enhancer for LLM-based recommendation. The compression enhancer uses a customized prompt to compress heterogeneous behaviors into a tailored token, while a masking mechanism enhances cross-domain knowledge extraction and understanding. The robustness enhancer introduces a domain importance score to mitigate the domain seesaw phenomenon by guiding domain optimization. Extensive experiments on heterogeneous datasets validate that HUM effectively models user heterogeneity by achieving both high efficacy and robustness, leading to superior performance in open-domain recommendation.
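The abstract describes HUM's two components only at a high level. The toy Python sketch below illustrates the general shape of those ideas under stated assumptions: a prompt that folds heterogeneous cross-domain behaviors into one summary token with random masking, and a domain-importance weighting of per-domain losses. All names (compress_behaviors, weighted_domain_loss, mask_ratio, the [USER] token) are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal illustrative sketch of the two enhancers described in the abstract.
# NOT the authors' implementation; every name and formula here is an assumption.
import random

def compress_behaviors(behaviors_by_domain, mask_ratio=0.2, seed=0):
    """Build one prompt that folds heterogeneous behaviors into a single
    compression slot; randomly mask some behaviors so the model must recover
    cross-domain signals (an assumed stand-in for the paper's masking)."""
    rng = random.Random(seed)
    lines = []
    for domain, items in behaviors_by_domain.items():
        kept = [it if rng.random() > mask_ratio else "[MASK]" for it in items]
        lines.append(f"{domain}: " + ", ".join(kept))
    # "[USER]" plays the role of the tailored token that summarizes the user.
    return "Summarize this user's preferences into [USER].\n" + "\n".join(lines)

def weighted_domain_loss(per_domain_losses, importance):
    """Combine per-domain losses with importance scores so no single domain
    dominates optimization (an assumed stand-in for the robustness enhancer)."""
    total_weight = sum(importance.values())
    return sum(importance[d] * loss for d, loss in per_domain_losses.items()) / total_weight

# Toy usage with made-up interaction data and loss values.
behaviors = {"Books": ["Dune", "Foundation"], "Movies": ["Arrival", "Her"]}
print(compress_behaviors(behaviors))
print(weighted_domain_loss({"Books": 0.8, "Movies": 1.4}, {"Books": 0.6, "Movies": 0.4}))
```

The weighting step is one simple way to read "guiding domain optimization with a domain importance score"; the paper may compute and apply the score quite differently.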
Similar Papers
A Comprehensive Review on Harnessing Large Language Models to Overcome Recommender System Challenges
Information Retrieval
Makes online suggestions smarter and more personal.
Multi-Modal Hypergraph Enhanced LLM Learning for Recommendation
Information Retrieval
Helps computers suggest better things you'll like.
Preserving Privacy and Utility in LLM-Based Product Recommendations
Information Retrieval
Keeps your private info safe while suggesting cool stuff.