Enhancing News Recommendation with Hierarchical LLM Prompting
By: Hai-Dang Kieu, Delvin Ce Zhang, Minh Duc Nguyen, and more
Potential Business Impact:
Makes news apps show you stories you'll love.
Personalized news recommendation systems often struggle to capture the complexity of user preferences because they rely heavily on shallow representations, such as article titles and abstracts. To address this problem, we introduce PNR-LLM, a novel method that leverages Large Language Models for Personalized News Recommendation. Specifically, PNR-LLM harnesses the generation capabilities of LLMs to enrich news titles and abstracts, and consequently improves recommendation quality. PNR-LLM contains a novel module, News Enrichment via LLMs, which generates deeper semantic information and relevant entities from articles, transforming shallow content into richer representations. We further propose an attention mechanism that aggregates the enriched semantic- and entity-level data into unified user and news embeddings, yielding a more accurate user-news match. Extensive experiments on the MIND datasets show that PNR-LLM outperforms state-of-the-art baselines. Moreover, the proposed data-enrichment module is model-agnostic: we empirically show that applying it to multiple existing models further improves their performance, verifying the advantage of our design.
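The abstract does not give the exact formulation of the attention aggregation, but the described step of fusing several views of an article (title, abstract, LLM-enriched semantics, entities) into one news embedding can be sketched with a standard additive-attention pool. Everything below (the function names, the use of a learned projection `W` and query `q`, and the four views) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(views, W, q):
    """Additive attention over a stack of view embeddings.

    views: (n_views, d) matrix; rows could be title, abstract,
           LLM-enriched semantic, and entity embeddings (assumed here).
    W:     (d, d) projection and q: (d,) query, learned in practice.
    Returns a single fused (d,) embedding.
    """
    scores = np.tanh(views @ W) @ q      # one score per view, (n_views,)
    weights = softmax(scores)            # attention weights sum to 1
    return weights @ views               # weighted sum -> (d,)

# Toy example: fuse four views of one article into a unified news embedding.
rng = np.random.default_rng(0)
d = 8
views = rng.normal(size=(4, d))          # title, abstract, enriched text, entities
W = rng.normal(size=(d, d)) / np.sqrt(d)
q = rng.normal(size=d)
news_emb = attention_pool(views, W, q)

# A user-news match score would then be, e.g., a dot product of a user
# embedding (built the same way from clicked news) and the news embedding.
user_emb = rng.normal(size=d)
score = float(user_emb @ news_emb)
```

The same pooling function can be reused on the user side over the embeddings of a user's clicked articles, which is what makes the enrichment module easy to bolt onto existing attention-based recommenders.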
Similar Papers
Semantic Mastery: Enhancing LLMs with Advanced Natural Language Understanding
Computation and Language
Makes AI understand and talk like people.
Multi-Modal Hypergraph Enhanced LLM Learning for Recommendation
Information Retrieval
Helps computers suggest better things you'll like.
User Feedback Alignment for LLM-powered Exploration in Large-scale Recommendation Systems
Information Retrieval
Finds new videos you'll like, not just favorites.